Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- 20241217/1802.05995v3.json +0 -0
- 20241217/2204.14067v3.json +0 -0
- 20241217/2205.10691v2.json +71 -0
- 20241217/2208.07442v2.json +110 -0
- 20241217/2302.00667v3.json +0 -0
- 20241217/2302.13292v2.json +0 -0
- 20241217/2303.08554v2.json +0 -0
- 20241217/2308.10062v7.json +160 -0
- 20241217/2309.10426v4.json +202 -0
- 20241217/2310.00074v3.json +0 -0
- 20241217/2311.02691v2.json +276 -0
- 20241217/2311.07889v2.json +239 -0
- 20241217/2311.14975v2.json +0 -0
- 20241217/2311.16900v2.json +0 -0
- 20241217/2312.16476v6.json +666 -0
- 20241217/2401.15713v3.json +537 -0
- 20241217/2402.09527v11.json +0 -0
- 20241217/2402.13532v2.json +0 -0
- 20241217/2402.13773v3.json +0 -0
- 20241217/2402.18264v2.json +0 -0
- 20241217/2403.10276v2.json +740 -0
- 20241217/2403.13680v4.json +0 -0
- 20241217/2403.15698v3.json +0 -0
- 20241217/2404.02877v4.json +0 -0
- 20241217/2404.06825v2.json +0 -0
- 20241217/2405.08359v2.json +0 -0
- 20241217/2405.14877v2.json +124 -0
- 20241217/2405.17812v2.json +179 -0
- 20241217/2406.04777v2.json +0 -0
- 20241217/2406.06342v3.json +0 -0
- 20241217/2406.08270v2.json +0 -0
- 20241217/2406.08689v3.json +191 -0
- 20241217/2406.10359v2.json +120 -0
- 20241217/2406.10984v3.json +0 -0
- 20241217/2406.11497v3.json +0 -0
- 20241217/2406.19525v2.json +60 -0
- 20241217/2407.03384v3.json +383 -0
- 20241217/2407.04368v2.json +444 -0
- 20241217/2407.16424v2.json +0 -0
- 20241217/2407.17418v2.json +0 -0
- 20241217/2408.01639v2.json +448 -0
- 20241217/2408.02960v2.json +403 -0
- 20241217/2408.04662v2.json +479 -0
- 20241217/2408.13854v2.json +0 -0
- 20241217/2409.09739v2.json +0 -0
- 20241217/2409.09777v4.json +0 -0
- 20241217/2409.10033v3.json +0 -0
- 20241217/2409.11404v3.json +0 -0
- 20241217/2409.12468v2.json +0 -0
- 20241217/2409.13474v3.json +0 -0
20241217/1802.05995v3.json
ADDED
The diff for this file is too large to render.
20241217/2204.14067v3.json
ADDED
The diff for this file is too large to render.
20241217/2205.10691v2.json
ADDED
|
@@ -0,0 +1,71 @@
| 1 |
+
{
|
| 2 |
+
"title": "Producing Histopathology Phantom Images using Generative Adversarial Networks to improve Tumor Detection",
|
| 3 |
+
"abstract": "Advance in medical imaging is an important part in deep learning research. One of the goals of computer vision is development of a holistic, comprehensive model which can identify tumors from histology slides obtained via biopsies. A major problem that stands in the way is lack of data for a few cancer-types. In this paper, we ascertain that data augmentation using GANs can be a viable solution to reduce the unevenness in the distribution of different cancer types in our dataset. Our demonstration showed that a dataset augmented to a 50 increase causes an increase in tumor detection from 80 to 87.5.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Cancer treatment used to be based on \u2018benign\u2019 or \u2018malignant\u2019 before. But as oncology flourished and the term \u2018cancer\u2019 enveloped over 300 different tumor types, identification of the characteristic morphology for each tumor became critical. Cancer cannot be conclusively diagnosed without a biopsy. Only after a complete diagnosis done by a surgical pathologist can a doctor develop a plan for treatment. A pathologist\u2019s analysis entails details of the type and origin of tumor, level of anaplasia, level of invasion, the numbers of lymph nodes with and without the tumor, enzyme activity, ploidy etc. This depends on the type of cancer. It requires a gross and introspective examination of structure, careful coordination among different specialisations (cytological pathology, clinical pathology, surgical pathology), and close collaboration with the caring clinician to synthesise information and create a pathology report. [connolly2003role] [leong2011changing]\nDue to an increasing workload in the field of cancer related diseases along with a decrease in the number of pathologists, automated assistance will be of great significance in the near future. [petriceks2018trends] [329a9967fc334d9caa48efc998bd6d16] This is where Deep Learning takes stage. We can support a pathologist\u2019s daily routine by facilitating clinical practice with computer-based processing and image analysis.\nDiagnosis from biopsy can be aided using Deep Learning models. For example, deep learning models can identify cancerous tissue against non-cancerous tissue, determine low level features such as percentage tumor area, percentage of cells in mitotic phases or the presence of hormone receptors. It is also possible for more sophisticated models to further identify high level features described previously such as anaplasia level, level of invasion or enzyme activity.\nIn some areas computation is more effective that current manual methods. For example, counting positively and negatively stained cells under Immunohistochemical Staining (IHC). To provide an IHC interpretation, pathologists provide estimates of positive/negative stained cells which suffer from poor reproducibility. This is something which can easily be automated. [4]\nWhole-slide scanners can digitise entire histology slides without much effort. These can generate vast amounts of digital data which opens up avenues for training Deep Learning models to fulfil analytical tasks in oncopathology. [farahani2015whole] One such task is tumor detection, a process which, when automated, would greatly increase the efficiency of pathologists. Researchers have focused on the identification particular tumor types [abdel2016breast][araujo2017classification][bejnordi2018using][coudray2018classification], but a comprehensive, fully realised model would be capable of identifying any tumor type with high accuracy. This kind of model requires large diverse datasets for every cancer type. Immediately a problem rises as hospitals do not receive cancer patients with a uniform distribution of cancer types and thus cannot collect enough data for rare cancer types.\nIn this paper, we test the functionality of Generative Adversarial Networks (GAN) as a reasonable solution to augment data of rare cancers. 
Our goal is to test whether data augmentation using GANs can improve tumor detection models.\nDue to an increasing workload in the field of cancer-related diseases along with a decrease in the number of pathologists, automated assistance will be of great significance in the near future. [18] [16] This is where Deep Learning takes the stage. We can support a pathologist\u2019s daily routine by facilitating clinical practice with computer-based processing and image analysis.\nDiagnosis from a biopsy can be aided using Deep Learning models. For example, deep learning models can identify cancerous tissue against non-cancerous tissue, determine low-level features such as percentage tumor area, percentage of cells in mitotic phases, or the presence of hormone receptors. It is also possible for more sophisticated models to further identify high-level features described previously such as anaplasia level, level of invasion, or enzyme activity.\nIn some areas, computation is more effective than the current manual methods. For example, counting positively and negatively stained cells under Immunohistochemical Staining (IHC). To provide an IHC interpretation, pathologists provide estimates of positive/negative stained cells that suffer from poor reproducibility. This is something that can easily be automated. [4]\nWhole-slide scanners can digitize entire histology slides without much effort. These can generate vast amounts of digital data which opens up avenues for training Deep Learning models to fulfill analytical tasks in oncopathology. [9] One such task is tumor detection, a process which, when automated, would greatly increase the efficiency of pathologists. Researchers have focused on the identification of particular tumor types [1][2][3][7], but a comprehensive, fully realized model would be capable of identifying any tumor type with high accuracy. This kind of model requires large diverse datasets for every cancer type. Immediately a problem rises as hospitals do not receive cancer patients with a uniform distribution of cancer types and thus cannot collect enough data for rare cancer types."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Background and Related Work",
|
| 15 |
+
"text": "A wide range of image augmentation techniques have been used in machine learning research, including flipping, rotating, shearing, cropping, zooming in/out, changing brightness, perturbing, texture transfer, style transfer , CNN based approaches, and GANs. [mikolajczyk2018data]\nThe original GAN structure was developed by (Goodfellow et al.) [goodfellow2014generative]. Since then, many alterations have been made to suit different purposes. During the development of PathologyGAN, (Shaban et al.)[quiros2019pathology] included a mapping layer before the generator, which structures the unstructured latent space, controlling the features produced by the generator. (Huo et al.) [huo2018adversarial] removes the generator entirely and adds an encoder-decoder structure in its stead, allowing for images to be translated from one domain to another. The application of this in Medical Imaging is Image Translation between MRIs, CT scans and PET scans. A similar effect can be gained by CycleGANs, which remove the discriminator and instead calculate Cycle Consistency Loss from two generators attempting to generate images from domain X1 to domain X2 and vice versa (Zhu et al.) [xu2014weakly]. cGANs add classification into the structure of GANs, so generators can produce features based on classification and discriminators can classify the training dataset.\nSince the genesis of GANs, researchers have had difficulty evaluating these models. Many different metrics such as Frechet Inception Distance (FID) [heusel2017gans], Maximum Mean Discrepancy (MMD)[gretton2012kernel], 1-Nearest Neighbor classifier (1-NN) [lopez2016revisiting][gretton2012kernel], Kernel Inception Distance (KID) (Binkowski et al., 2018)[binkowski2018demystifying], and studies such as (Xu et al., 2018; Barratt and Sharma, 2018)\u2019s work [xu2014weakly] have described their advantages and disadvantages.\nA wide range of image augmentation techniques have been used in machine learning research, including flipping, rotating, shearing, cropping, zooming in/out, changing brightness, perturbing, texture transfer, style transfer, CNN based approaches, and GANs. [17]\nThe original GAN structure was developed by (Goodfellow et al.). Since then, many alterations have been made to suit different purposes. During the development of PathologyGAN, (Shaban et al.)[19] included a mapping layer before the generator, which structures the unstructured latent space, controlling the features produced by the generator. (Huo et al.) [13] removes the generator entirely and adds an encoder-decoder structure in its stead, allowing for images to be translated from one domain to another. The application of this in Medical Imaging is Image Translation between MRIs, CT scans, and PET scans. A similar effect can be gained by CycleGANs, which remove the discriminator and instead calculate Cycle Consistency Loss from two generators attempting to generate images from domain X1 to domain X2 and vice versa (Zhu et al.) [22]. cGANs add classification into the structure of GANs, so generators can produce features based on classification and discriminators can classify the training dataset.\nSince the genesis of GANs, researchers have had difficulty evaluating these models. 
Many different metrics such as Frechet Inception Distance (FID) [12], Maximum Mean Discrepancy (MMD)[10], the 1-Nearest Neighbor classifier (1-NN) [15][10], Kernel Inception Distance (KID) (Binkowski et al., 2018)[4], and studies such as (Xu et al., 2018; Barratt and Sharma, 2018)\u2019s work [22] have described their advantages and disadvantages.\nThe attitude of machine learning researchers for digital pathology is to build classifiers which achieve pathologist-level diagnoses for a few cancer types. (Esteva et al., 2017; Wei et al., 2019; Han et al., 2017)[estava2017dermatologist][han2017breast] The primary goal here is to aid decision by computer-human interaction. (Cai et al., 2019)[cai2019human]\nThere has also recently been interest in utilising GANs for digitised staining (Rana et al., 2018; Xu et al., 2019)[rana2018computational] (Ghazvinian Zanjani et al., 2018) [zanjani2018stain], phantom image generation (Senaras et al. 2018) [senaras2018optimized] and nuclei segmentation.\nThe attitude of machine learning researchers for digital pathology is to build classifiers that achieve pathologist-level diagnoses for a few cancer types. (Esteva et al., 2017; Wei et al., 2019; Han et al., 2017)[8][11] The primary goal here is to aid decision by computer-human interaction. (Cai et al., 2019)[5] There has also recently been interest in utilizing GANs for digitized staining (Rana et al., 2018; Xu et al., 2019)[20] (Ghazvinian Zanjani et al., 2018), phantom image generation (Senaras et al. 2018) [21] and nuclei segmentation."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Dataset",
|
| 21 |
+
"text": "To train our model, a HE Breast Cancer dataset was selected. The dataset is derived from 162 whole-slide images leading to 277,524 50x50 patches of images. Of these, we used 3324 non-tumor patches and 4289 tumorous patches to train the GAN and Convolutional Network. These patches were selected randomly from the whole dataset."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Proposed Architecture",
|
| 27 |
+
"text": "###figure_1### To test the improvement in tumor detection with data augmentation, we obtain tumor-detection accuracy twice. Once, with the original dataset, and once augmented with generated images. You can see the overarching structure of networks in the Figure 1."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "Generative Adversarial Network",
|
| 33 |
+
"text": "Generative Adversarial Networks contain two submodels: the generator (G) and the discriminator (D). The generator model takes in a random vector, z as input, and outputs fake histology images, . z is purely random noise, based on the distribution p(z) which, for simplicity, we chose as a uniform distribution. is going to be trained to be similar to real images , drawn from the real data .\nThe input to D is either or . The output of D, is a single value indicating whether the sample is \u2018real\u2019 or \u2018fake\u2019. Optionally, Discriminator loss and Generator loss are also output when D and G respectively are being trained. After successful training, generated samples form a distribution , which is approximately the distribution of real images .\nThe discriminator and generator trained using Adam Optimizer with . While the discriminator\u2019s goal is to identify which image is real, the generator goal is to confuse the discriminator by producing realistic images.\nThe discriminator\u2019s loss function is output from itself as it is used in training to improve accuracy at identifying real images. However, the generator\u2019s loss function is also output from the discriminator since its goal is to essentially fool the discriminator by producing realistic images. The generator\u2019s training is complete once the discriminator cannot consistently identify fake images i.e. the probability that a generated image is classified as \u2018fake\u2019 is 50. Then, either the discriminator is trained to improve at identifying fake images or the GAN is considered to be fully trained. The training goals of D and G can be expressed as:\nOnce the GAN is trained and it synthesises phantom images, the difference between the two distributions and is calculated as Frechet Inception Distance (FID). This characterises the effectiveness of the GAN in symbolising the original dataset when generating images."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.2",
|
| 37 |
+
"parent_section_id": "4",
|
| 38 |
+
"section_name": "Convolutional Neural Network",
|
| 39 |
+
"text": "The Convolutional Network is utilised two times, once with the original dataset and once with the augmented dataset . Batches of 100 images are input into into the model, and then feature detection maps are utilised to output a classification of each image as Cancerous and Non-Cancerous. We use the Adagrad Optimizer to train this model."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "5",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Results",
|
| 45 |
+
"text": "Results can be seen in Table 1. With a FID of 35.495, the GAN can produce a reliable distribution of synthesised images for the H&E Invasive Ductal Carcinoma Dataset. Table 1 also represents the percentage increase percentage accuracy. As the dataset was augmented by 50, tumor detection accuracy increased by 8.75. Hence, we can state that GANs can cause a percentage increase accuracy of tumor detection models.\nHowever, there are limitations in the proposed method. Out testing happened on a single dataset, which has been derived from a diverse selection of data. This is not representative of the type of data that would be obtained for rare cancer types, as subjects would be scarce.\nIn future research, we anticipate that a wide variety of datasets would be tested with GANs for increase in tumour detection accuracy, or other models executing pathology analysis. We also expect different types of GANs such as BigGAN, StyleGAN or PathologyGAN to be used to produce images since these provide desired advantages such as high resolution or structured histology images."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "5.1",
|
| 49 |
+
"parent_section_id": "5",
|
| 50 |
+
"section_name": "Conclusions",
|
| 51 |
+
"text": "This paper contemplates the problem of inconsistency in data collection for rare cancers and its effects on future deep learning research. The solution proposed is to use Generative Adversarial Networks, which can generate fake images from a small dataset. Our demonstration proves that GANs can be effective as a data augmentation strategy for Deep Learning research in Digital Pathology. Looking forward, GANs can serve as a crucial factor to equalize the distribution of different cancer-types when developing holistic Deep Learning Models."
|
| 52 |
+
}
|
| 53 |
+
],
|
| 54 |
+
"appendix": [],
|
| 55 |
+
"tables": {
|
| 56 |
+
"1": {
|
| 57 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S5.T1.4.5.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr ltx_border_t\" id=\"S5.T1.4.5.1.2\">H&E Breast Cancer IDC Tissue</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.6.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_ll ltx_border_r\" id=\"S5.T1.4.6.2.1\">Frechet Inception Distance</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr\" id=\"S5.T1.4.6.2.2\">35.495</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r\" id=\"S5.T1.1.1.2\">Augmented Dataset Percentage Increase</th>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.1.1.1\">50.00\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r\" id=\"S5.T1.2.2.2\">Original Convolutional Network Accuracy</th>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.2.2.1\">80.00\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r\" id=\"S5.T1.3.3.2\">Augmented Convolutional Network Accuracy</th>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S5.T1.3.3.1\">87.00\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_ll ltx_border_r\" id=\"S5.T1.4.4.2\">Convolutional Network Accuracy Percentage Increase</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S5.T1.4.4.1\">08.75\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The augmented dataset improves convolutional accuracy by 8.75</figcaption>\n</figure>",
|
| 58 |
+
"capture": "Table 1: The augmented dataset improves convolutional accuracy by 8.75"
|
| 59 |
+
}
|
| 60 |
+
},
|
| 61 |
+
"image_paths": {
|
| 62 |
+
"1": {
|
| 63 |
+
"figure_path": "2205.10691v2_figure_1.png",
|
| 64 |
+
"caption": "Figure 1: Overarching Structure: The Original Dataset Fosubscript\ud835\udc39\ud835\udc5cF_{o}italic_F start_POSTSUBSCRIPT italic_o end_POSTSUBSCRIPT is input into the Convolutional Network and the Augmented Dataset Fasubscript\ud835\udc39\ud835\udc4eF_{a}italic_F start_POSTSUBSCRIPT italic_a end_POSTSUBSCRIPT is input into the Convolutional Network",
|
| 65 |
+
"url": "http://arxiv.org/html/2205.10691v2/extracted/6065039/dataset-graphic.png"
|
| 66 |
+
}
|
| 67 |
+
},
|
| 68 |
+
"validation": true,
|
| 69 |
+
"references": [],
|
| 70 |
+
"url": "http://arxiv.org/html/2205.10691v2"
|
| 71 |
+
}
|
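The record above makes the per-paper schema used throughout this commit visible: title, abstract, a flat sections list (each entry carrying section_id, parent_section_id, section_name, and text), plus tables, image_paths, validation, references, and url. As a minimal, hypothetical sketch of how one of these files might be inspected after downloading it locally — the path and function name here are illustrative, and only the standard-library json module is assumed:

```python
import json

# Hypothetical local path; in this commit the file sits under the 20241217/ folder.
RECORD_PATH = "20241217/2205.10691v2.json"

def load_record(path):
    """Load one per-paper record and return its title plus ordered section headings."""
    with open(path, encoding="utf-8") as fh:
        record = json.load(fh)
    # Sections are stored as a flat list; parent_section_id ties a subsection
    # such as "4.1" back to its parent section "4".
    headings = [(s["section_id"], s["section_name"]) for s in record["sections"]]
    return record["title"], headings

if __name__ == "__main__":
    title, headings = load_record(RECORD_PATH)
    print(title)
    for section_id, name in headings:
        print(f"  {section_id}: {name}")
```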
20241217/2208.07442v2.json
ADDED
|
@@ -0,0 +1,110 @@
| 1 |
+
{
|
| 2 |
+
"title": "Viability of Robot-supported Flipped Classes in English for Medical Use Reading Comprehension",
|
| 3 |
+
"abstract": "This study delved into the viability of Robot-supported flipped classes in English for Medical Purposes reading comprehension. In a 16-session course, the reading comprehension and then workspace performance of 444 students, with Commercially-Off-The-Shelf and Self-Generated robot flipped classes were compared. The results indicated that the flipped classes brought about a good instructional-learning ambience in postsecondary education for English for Medical Purposes (EMP) reading comprehension and adopting proactive approach for workspace performance. In tandem, the Mixed Effect Model revealed that student participation in the self-generated robot-supported flipped classes yielded a larger effect size (+17.6%) than Commercially-Off-The-Shelf robot-supported flipped classes. Analyses produced five contributing moderators of EMP reading comprehension and workspace performance: reading proficiency, attitude, manner of practicing, as well as student and teacher role.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Researchers [1 ###reference_b1###], [2 ###reference_b2###] state that heavy dependency on less flexible conventional approaches for teaching English for Specific Purposes (ESP) is the major cause of failure in teaching ESP reading comprehension. Sherwood in [3 ###reference_b3###] contends, \u201dwhat is now required [in ESP instructional-learning contexts] is a consideration of the call for more transferable skills in the light of contemporary figures on the employment of graduates\u201d. These results lead to the importance of providing hands-on lessons to actively involve the students in the teaching process.\nESP courses follow two major aims of preparing collegiate students for their academic life (viz., English for Specific and Academic Purposes or ESAP) and future career in post-academic contexts (viz., English for Specific and Occupational Pur-poses or ESOP), hence, current and future working conditions demand that teaching ESP reading should not be restricted only to the classroom [4 ###reference_b4###]. Students are in need to not only understand the academic materials, but also to communicate with their cohorts in the international working milieus [5 ###reference_b5###].\nAlong these lines, educational technology (EdTech) devotees are looking to what is to come [6 ###reference_b6###]. EdTech has changed the way individuals access information, and communicate. EdTech-supported language education has brought about rich opportunities for communication and efforts among students; this way, EdTech-supported English language education has been successful in increasing the motivation of students to develop their understanding. It is increasingly clear that EdTech represents Language for Specific Purposes (LSP) more than a helpful realia.\nRobots [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###] functioning as teaching assistants present a chance to enrich education [14 ###reference_b14###, 15 ###reference_b15###]. Researchers have suggested that robots can act as teaching assistants, learning companions, or learning tools to enhance the academic setting [16 ###reference_b16###, 17 ###reference_b17###]. A learning framework supported by robot teaching assistants provides tailored assistance during instructor-guided lessons, aiding in the understanding of difficult subject matter [18 ###reference_b18###]. Furthermore, this approach not only boosts students\u2019 academic outcomes but also positively impacts their attitudes and engagement in learning [19 ###reference_b19###].\nDespite the influence that robots have continued to exert on the LSP, most studies in RBLE have only been carried out in a small number of areas. This lack is even more profound when it comes to postsecondary education. This study intended to examine the viability of robot-supported flipped classes in teaching ESP reading comprehension (with special reference to Medical English, namely EMP), and to identify the factors that are important to employing robot-supported flipped classes of postsecondary education."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Method",
|
| 15 |
+
"text": "The study was done in two phases: in the first phase we were focused on designing and manufacturing a proper robot for the current study, and in the second phase we arrange a questionnaire survey on students in robot-supported flipped classes."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A \u201dSafir\u201d Robot",
|
| 21 |
+
"text": "A new robot was designed and manufactured for the planned robot-supported classes as shown in Fig. 1 ###reference_###. The robot was a 1 meter height humanoid mobile robot. It has 10 Degrees of Freedom (DoFs): 2 in base, one in neck, one in left hand and 6 in the right hand. These DoFs let robot to move in the class on its wheels; nodding using its head; raising both of its 3D printed hands [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###]; pointing and making fist using its right hand with its 2 DoFs cable driven fingers. The robot also has an LCD Face and a speaker which let it to read aloud preprogrammed texts and show the proper lip motions.\n###figure_1### The main processor of Safir is a Raspberry Pi board. A python program was written for Safir to allow the user to control it using a remote keyboard [30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###, 38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###]. It could play a recorded audio of a reading, moving the robot\u2019s lips and moving robots hands, fingers and head based on a YAML file. This way we could store multiple scenarios for the classes."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B The Survey",
|
| 27 |
+
"text": "In the academic year 2017-2018, a sample of 444 students (140 males and 304 fe-males from the disciplines of Medical Library, Nursing, Health Information Technology (HIT), Nutritional Sciences, and Pharmacy) from as many as 463 students of Medical University of Isfahan was selected. They were in EMP courses. Further, four TEFL major students who took compulsory course of Materials Development were randomly selected to be included as members of the self-generated circles.\nTo identify their levels of proficiency, they took 30 items of Test of English as a Foreign Language (TOEFL)-like reading (5 passages each with 6 multiple-choice items). The test enjoyed the reliability of r=.79 as well as content and face validity.\nTo practice the materials under the surveillance of subject-area and English teachers, the students were randomly assigned to two sets of Commercially-Of-The-Shelf- (n=340) and self-generated- (n=104) AR-supported activities. Students in the Commercially-Of-The-Shelf set were further randomly assigned to either individual (n=220) or collective group (n=120). As for the collective practicing, students were divided into three-member circles. Students in the self-generated set were randomly divided into the monodisciplinary (n=96) and interdisciplinary groups. A shot of the interdisciplinary circles is shown in Fig. 2 ###reference_###.\n###figure_2### As to developing self-generated activities in monodisciplinary group circles, the students were invited to complete 64 templates. The details about the participants are summarized in Table I ###reference_###. In the table, please note that Subject is equal to subject-area teacher; to explore the degree of success that might be achieved by the teachers, while half of the participants were randomly assigned to classes that were conducted by an English teacher, the other half assigned to classes that were conducted by a subject-area teacher.\n###figure_3### This complementarity study with full factorial design was conducted as follows:\nStep I: Initially, the students were given the Google form of a Persian attitude questionnaire with 21 items covering the five categories of EMP teaching through robot-supported classes. To identify students\u2019 Basic Technology (BTC) levels, the second part of the questionnaire was assigned to students\u2019 self-assessment. The questionnaire was face and content validated by five TEFL and subject-area experts. It was tested for the reliability using Cronbach\u2019s Alpha (r=0.8). A chi-square over the degree of freedom value, =2.34 showed a good fit.\nStep II: Treatment and assessment: This study was conducted in 16 sessions. To conduct the flipped classroom in addition to the 90-minute-a-week online sessions, 90-minute-a-week sessions for practicing robot-supported activities were added.\nThe materials were taken from English for the Students of Nutrition [41 ###reference_b41###], English for the Students of Pharmacy [42 ###reference_b42###], English for the Students of Nursing [43 ###reference_b43###], English Texts for the Students of Library [44 ###reference_b44###]. The materials were changed into computer-readable passages for online sessions.\nRegarding the self-generated activities, a mini-corpus containing 240 subject-area passages was developed. Students selected materials from the mini-corpus. Consequently, the prototypes of the activities were developed to be practiced by the students in the subsequent sessions. 
Fifty-six Commercially-Of-The-Shelves were selected for Safir robot. Their motifs were in line with the topic of the lessons.\nStep III: Assessment of students\u2019 performance in real-world workspace: Six weeks after the final session, students arrived to the workspaces to be assessed regarding their ability to use the materials.\nStep IV: Interview: As the assessment of students\u2019 performance in real-world work-spaces was completed, interview with the students with the lowest and highest scores from each group was conducted in Persian. The interviews were transcribed and analyzed by the researchers."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Results",
|
| 33 |
+
"text": ""
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A Students\u2019 responses to the questionnaire",
|
| 39 |
+
"text": "Analyzing the students\u2019 responses revealed that most of them were dissatisfied with the features of common English classes. They complained that in Higher Education Institutions (HEIs), students learn how to practice textbook materials and exercises but did not get any coursework in how to tap into the materials effectively in the actual world. Mart\u00edn et al. [45 ###reference_b45###] state that these exercises underspecify the real-world features of materials. A great majority of the students held positive or fairly positive attitudes towards the extended learning of reading English. Playful practicing of reading could help students reevaluate the information in the materials [46 ###reference_b46###]. Accordingly, a substantial minority voted for reliance only on textbook exercises.\nWhen they were asked if they prefer activities to be related to their daily lives, they all with one accord said that what they liked best was doing well as workforce. When it comes to integrating robot across subjects to boost student English learning, a majority of the students were of the opinion that simulation of real-world through robots can foster wider outreach among students. Additionally, they opined that reading classes need to be presented in authentic contexts. A similar population believed that these activities better cater to students\u2019 needs. They highlighted the need for functioning adequately in both academia and workspaces. They said that robots help to foster more accurate mental representations of the materials. Through simulating the workspaces, students can be well disposed to give their attention to main points in the subject-matter area [47 ###reference_b47###]. However, meanwhile, they vetoed the idea that robot-supported activities can facilitate learning of different EMP skills to the same extent. The results were in line with the Smith\u2019s assumption in [48 ###reference_b48###] that different platforms \u201dimpart different skills to students and certain skills are more attractive to some employers than others\u201d (p. 282).\nMost of the students asserted that students need to be valued members of the in-structional-learning setting. They preferred to become involved rather than observe the workspaces. Boyne et. al [49 ###reference_b49###] suggest ways of es-tablishing engaging contexts by fostering \u201da range of more active, experiential, stu-dent-centered approaches to learning, especially in conjunction with the workplace, would be likely to produce the desired enterprise outcomes\u201d (p. 6). This active role gives students suffrage to choose activities consistent with their needs. From the respondents\u2019 per-spectives, practicing robot-supported reading in interdisciplinary milieu presages qualifications of comprehension. Under such circumstances, students can collaborate with each other as well, thus, they will come up with answers on their own. The pro-posal for coteaching in robot-supported teaching milieus was held in tight embrace of the respondents. According to [50 ###reference_b50###], as interest in engaging students continues to grow, ease of use, as the prominent feature of robots, can play a big part in helping students become more involved in their academic and future lives."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Analyzing the students\u2019 reading comprehension and performance",
|
| 45 |
+
"text": "For data analysis, Linear Mixed-Effect Model with random intercept and random slope was used. As shown in Table II ###reference_###, teaching materials through Commercially-Of-The-Shelf robot-supported flipped classes did little to encourage students to stand on their own feet (MCOTS.IN.E = 14.6 & MCOTS.COLL.E = 16.41). The rate of progress was the highest when the self-generated activities were practiced in interdisciplinary (M16E = 19.75) vs. monodisciplinary circles (M16E =17.58). The frequency of incomplete activities in online classrooms and lack of success in the real-world scenes could be indication of the lack of students\u2019 ability who practiced through Commercially-Of-The-Shelf vs. self-generated activities (MRE.COTS = 14.6, MRE.SG. = 19). Of course, the rate of progress was different in the groups as far as the teacher role was concerned to the extent that the students who were taught by English teachers achieved greater progress (ME.COTS = 16.41MSUB.COTS = 16.31; ME.SG = 19.75MSUB.SG = 17.75) and workspace score than their counter-parts who were taught by the subject-area teacher (MEREAL = 14.43MSUBJREAL = 14.2; MEREAL = 19MSUBJREAL= 17.75).\n###figure_4### As to inferential analysis of the data, baseline scores were considered as covari-ate; thus, the participants\u2019 scores were adjusted regarding their different baseline scores.\nTable of Hypothesis Testing for Within-Subject and Between-Subject effects on students\u2019 reading comprehension (Table III ###reference_###) shows the probable effects of within-subject and between-subject parameters on students\u2019 comprehension and performance.\n###figure_5### Analysis of the data shows the significant effect of reading proficiency (RP) on students\u2019 EMP reading in the first session (F = 6944.401, Sig. = 0.001). The effect of using flipped classes for teaching EMP reading comprehension on progress rate was significant (F = 861.191, Sig. = 0.001). t\u22c6set shows that students\u2019 progress was significantly different in the sets (F = 47.600, Sig. = 0.001). t\u22c6discipline shows that students\u2019 progress was significantly different in different disciplines (F = 8.221, Sig.= 0.001). Similarly, t\u22c6disciplinary discloses that students\u2019 progress was significantly different in monodisciplinary and interdisciplinary groups (F = 7.535, Sig. = 0.006). But, t\u22c6pair shows that students\u2019 movement in interdisciplinary circles with different HIT and TEFL members did not give rise to a significant differences in their EMP comprehension (F = 1.088, Sig. = 0.368). t\u22c6attrobot indicates that students\u2019 attitudes towards robot-supported activities did not lead to significant difference in their progress (F = 4.795, Sig. = 0.029). Also, students\u2019 BTC did not bring about significant differences in their progress (F = 2.547, Sig. = 0.111). t\u22c6att.flipped classes, however, indicates that the students\u2019 attitudes towards robot produced significant differences in their EMP reading (F = 42.457, Sig. = 0.001). t\u22c6set\u22c6attactive role shows there is an interaction effect between practicing self-generated activities and attitude towards playing active role in these modules on progress of students (F = 81.363, Sig. = 0.001). t\u22c6teacher\u22c6discipline reveals the interaction effect between teacher and students\u2019 discipline on their EMP comprehension (F = 7.156, Sig. 
= 0.000); however, this was not the case as far as the interaction effect of teacher and robot types on students\u2019 comprehension was concerned (F = 1.053, Sig. = 0.305).\nTo compare the participants\u2019 scores in workspaces, the emphasis was put on comparing longevity in learning. For that reason, the students\u2019 workspace scores were adjusted lest the effect of final session is present; put simply, the effect of teaching and longevity was separated. This way, the students\u2019 scores in the last session were co-variated and their workspace scores were adjusted; thus, the adjusted scores, void of teaching, practicing, and learning effect, were taken into account. According to the table of the Analysis of variances (Table IV ###reference_###) and the table of Differences in the Progress in Table V ###reference_###, the participants\u2019 workspace scores were predictable from their final session scores (viz., the effect of teaching, practicing, & learning). Although the participants\u2019 discipline (F = 1.446, Sig. = .218) did not lead into significant differences in terms of longevity, set (F= 78.174, Sig. = 0.000), practicing manner (F = 18.232, Sig. = .000), teacher type (F = 31.492, Sig. = .000), and disciplinary circles (F = 19.412, Sig. = .000) resulted in significant differences in longevity. In tandem, Partial Eta Squared of .153 revealed that activity exerted the most profound effect on longevity.\n###figure_6### ###figure_7###"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-C Students\u2019 responses to the interview",
|
| 51 |
+
"text": "Retrospection of the students\u2019 answers to the prompts revealed that though there were varying opinions, a set of features was evolved which were collated to the final list:\nFlipped classes make it possible to enjoy the advantages of two modes of presentation and practice. The students\u2019 interpretation was that when they read EMP mate-rials from different sources their comprehension was boosted. The scenes of robot-supported activities provided repeated exposure to real language use. This helped students gain stronger educational experiences. The students\u2019 statements indicated that robot-supported activities were useful for encouraging greater participation along with practicing reading comprehension.\nMore interesting, perceived usefulness was more influential in the accounts of the students from self-generated groups. As said by these students, each student has specific needs and m-robot-supported activities were of help in detecting students\u2019 problems in comprehension. They knew the ability to develop personalized activities as the maximum benefit of flipped classes with self-generated activities. On their words, they could gain lots of ideas from the materials and employ them in real-world arenas. Similarly, they had ideas to make things better and self-generated project allowed them to do it. Students thought their interaction for developing activities as dialogic effort for comprehension, though they said they support each other in meeting the expectations. This way, practicing via robots helped students build future-ready learning experiences.\nBesides, students who practiced self-generated robots became more risk-taking as stated by these students. From their view, the burden of teaching shifts to certain extent to students; thus, it was a win-win situation for both teachers and students. They were of the opinion that robot-supported reading had the potential to be conducted fully online in outdoors. Nevertheless, they saw the online classroom as a precondition for the successful development of reading comprehension.\nInterestingly, majority of the students, who practiced through their robots, stated that they had a good handle on components of academia and workspaces. As said by the students, when they were endowed with the right for preparing activities, scenes were set to read between the lines. Conversely, the participants from the Commercially-Of-The-Shelf group blamed the lack of taking full advantage of r-supported activities to their passive presence in the instructional-learning contexts. Even so, mention should be made that, students with low level of English reading proficiency highlighted the great novelty of self-generated activities and complained that developing activities could be considered mystifying to them. They altogether emphasized that before jumping into using robots, teachers needed adequate training. By the way, they highly endorsed the coteaching in online classrooms. Along these lines students opined that affordability and productivity of robots work round to using robots in EMP reading comprehension. The result of analyzing the responses dis-closed that many of the factors that influence the successful adoption of flipped classes are similar to the factors identified in productive academia and workspaces. The classifications constituting these elements were joined to figure a blueprint to function as a summary for flipped classes."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "IV Discussion",
|
| 57 |
+
"text": ""
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "IV-A Robot-supported flipped classes and constructionism approach",
|
| 63 |
+
"text": "The constructivism approach describes how student active participation influences student comprehension and performance, which, in tandem, smooths the path for student proactive role. A finding disclosed from this study was that the robot-supported flipped classes in postsecondary education appeared successful for both academic and professional purposes. These classes demonstrated gains in EMP reading comprehension. Such finding was on the side of both students\u2019 attitude and perception, that is, by embracing the possibility of developing variegated activities for students that account for their needs, flipped classes held great promise for helping students in learning EMP reading materials. With respect to students\u2019 ability in tapping into their EMP reading comprehension in workspaces and their outperformance as a result of practicing EMP materials via AR-supported flipped classes it could be claimed that flipped classes have the potential to conceptualize authentic settings, namely Situated Learning Theory. Along with Brown\u2019s saying in [51 ###reference_b51###], it was revealed that in giving students reading activities out of context we set them a difficult task. In effect, with the exquisite features of robot-supported flipped classes, students could access the materials pertaining to their academics. This, as indicated before, seems to confirm findings of studies in ESP reading comprehension to this effect that flipped classes through combining customary classrooms and EdTech furnish contexts with reinforced comprehension [1 ###reference_b1###], [52 ###reference_b52###]. However, it falls in contrast to the findings of studies which have pointed to the temporary nature of learning that resulted from robot-based education [53 ###reference_b53###]. The finding also echoed the implications of the previous studies that graduate students\u2019 failure in workspaces could be attributed to inefficiency of postsecondary teaching programs [54 ###reference_b54###]. Equally, the result of this study implied that teaching related to students\u2019 actual subject-area activities, namely student engagement in the activities they are confronted with their academic and professional lives facilitate student proactive role."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-B The way(s) of conducting robot-supported flipped classes",
|
| 69 |
+
"text": "The result of the study appeared to support the viewpoint that it would be na\u00efve to acknowledge that the mere use of robots results in students\u2019 better comprehension and outperformance. The result was not only dependent on robots but also to innovative course of action for employing robots for teaching. The result went in for the dictum that comprehension of materials remains unchanged for students irrespective of way of practicing [55 ###reference_b55###]. This implies that the optimal integration of ro-bot-supported activities into flipped classes occurred when these activities were students\u2019 self-generated type [5 ###reference_b5###].\nAgain, finding reveled that self-generated AR activities could help students immerse in rich details. And, students could employ sufficient information by incorporating cues. These activities proved the efficiency of visualization as one of the reading comprehension strategies. As a matter of fact, students\u2019 active presence in the process of teaching EMP materials could ease the cognitive load. Interestingly, the difficulty level of the passages selected by the students in developing self-generated setting increased parallel with the increase in the difficulty level of the passages in online classes, that is, students\u2019 active role in developing activities heightened their awareness of the materials and contexts. This way, self-generated activities created association with other works, namely interdisciplinary learning conditions. And this is the right place that windows were opened for entrepreneurship along with education (viz. edupreneurs)."
|
| 70 |
+
}
|
| 71 |
+
],
|
| 72 |
+
"appendix": [],
|
| 73 |
+
"tables": {
|
| 74 |
+
"1": {
|
| 75 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S2.T1.3.2\" style=\"font-size:90%;\">the correlation between the hidden variables (Fornel and Locker Analysis)</span></figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"781\" id=\"S2.T1.g1\" src=\"x1.jpg\" width=\"1196\"/>\n</figure>",
|
| 76 |
+
"capture": "TABLE I: the correlation between the hidden variables (Fornel and Locker Analysis)"
|
| 77 |
+
},
|
| 78 |
+
"2": {
|
| 79 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T2.2.1.1\" style=\"font-size:90%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S3.T2.3.2\" style=\"font-size:90%;\">Comparison of the Participants\u2019 Progress.</span></figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"417\" id=\"S3.T2.g1\" src=\"x2.jpg\" width=\"1196\"/>\n</figure>",
|
| 80 |
+
"capture": "TABLE II: Comparison of the Participants\u2019 Progress."
|
| 81 |
+
},
|
| 82 |
+
"3": {
|
| 83 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T3.2.1.1\" style=\"font-size:90%;\">TABLE III</span>: </span><span class=\"ltx_text\" id=\"S3.T3.3.2\" style=\"font-size:90%;\">Hypothesis Testing for Within-subject and Between-subject Effects on the Students\u2019 Com-prehension and Performance.</span></figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"793\" id=\"S3.T3.g1\" src=\"x3.jpg\" width=\"1196\"/>\n</figure>",
|
| 84 |
+
"capture": "TABLE III: Hypothesis Testing for Within-subject and Between-subject Effects on the Students\u2019 Com-prehension and Performance."
|
| 85 |
+
},
|
| 86 |
+
"4": {
|
| 87 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T4.2.1.1\" style=\"font-size:90%;\">TABLE IV</span>: </span><span class=\"ltx_text\" id=\"S3.T4.3.2\" style=\"font-size:90%;\">Analysis of Variances for the Participants\u2019 Workspace Performance.</span></figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_landscape\" height=\"395\" id=\"S3.T4.g1\" src=\"x4.jpg\" width=\"1196\"/>\n</figure>",
|
| 88 |
+
"capture": "TABLE IV: Analysis of Variances for the Participants\u2019 Workspace Performance."
|
| 89 |
+
},
|
| 90 |
+
"5": {
|
| 91 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T5.2.1.1\" style=\"font-size:90%;\">TABLE V</span>: </span><span class=\"ltx_text\" id=\"S3.T5.3.2\" style=\"font-size:90%;\">Difference in the Progress.</span></figcaption><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_centering ltx_img_square\" height=\"856\" id=\"S3.T5.g1\" src=\"x5.jpg\" width=\"837\"/>\n</figure>",
|
| 92 |
+
"capture": "TABLE V: Difference in the Progress."
|
| 93 |
+
}
|
| 94 |
+
},
|
| 95 |
+
"image_paths": {
|
| 96 |
+
"1": {
|
| 97 |
+
"figure_path": "2208.07442v2_figure_1.png",
|
| 98 |
+
"caption": "Figure 1: The Safir Robot.",
|
| 99 |
+
"url": "http://arxiv.org/html/2208.07442v2/extracted/6076386/safir.jpg"
|
| 100 |
+
},
|
| 101 |
+
"2": {
|
| 102 |
+
"figure_path": "2208.07442v2_figure_2.png",
|
| 103 |
+
"caption": "Figure 2: The make-up of circles.",
|
| 104 |
+
"url": "http://arxiv.org/html/2208.07442v2/extracted/6076386/circles.jpg"
|
| 105 |
+
}
|
| 106 |
+
},
|
| 107 |
+
"validation": true,
|
| 108 |
+
"references": [],
|
| 109 |
+
"url": "http://arxiv.org/html/2208.07442v2"
|
| 110 |
+
}
|
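This second record also fills in the tables and image_paths maps (LaTeXML-style table HTML alongside a plain-text capture, and figure paths paired with arXiv URLs). Below is a small, hypothetical sketch of pulling those assets out of a loaded record; the field names come from the diff above, while the path and helper name are illustrative:

```python
import json

def collect_assets(path):
    """Return the figure URLs and table captions referenced by one record."""
    with open(path, encoding="utf-8") as fh:
        record = json.load(fh)
    figures = {key: fig["url"] for key, fig in record.get("image_paths", {}).items()}
    tables = {key: tab["capture"] for key, tab in record.get("tables", {}).items()}
    return figures, tables

if __name__ == "__main__":
    figures, tables = collect_assets("20241217/2208.07442v2.json")
    for key, url in figures.items():
        print(f"Figure {key}: {url}")
    for key, caption in tables.items():
        print(f"Table {key}: {caption}")
```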
20241217/2302.00667v3.json
ADDED
The diff for this file is too large to render.
20241217/2302.13292v2.json
ADDED
The diff for this file is too large to render.
20241217/2303.08554v2.json
ADDED
The diff for this file is too large to render.
20241217/2308.10062v7.json
ADDED
|
@@ -0,0 +1,160 @@
| 1 |
+
{
|
| 2 |
+
"title": "Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency",
|
| 3 |
+
"abstract": "In the realm of computer systems, efficient utilisation of the CPU (Central Processing Unit) has always been a paramount concern. Researchers and engineers have long sought ways to optimise process execution on the CPU, leading to the emergence of CPU scheduling as a field of study. In this research, we have analysed the single offline batch processing and investigated other sophisticated paradigms such as time-sharing operating systems and wildly used algorithms, and their shortcomings. Our work is directed towards two fundamental aspects of scheduling: efficiency and fairness. We propose a novel algorithm for batch processing that operates on a preemptive model, dynamically assigning priorities based on a robust ratio, employing a dynamic time slice, and utilising periodic sorting to achieve fairness. By engineering this responsive and fair model, the proposed algorithm strikes a delicate balance between efficiency and fairness, providing an optimised solution for batch scheduling while ensuring system responsiveness.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In the realm of computer systems, maximising the efficient utilisation of the CPU (Central Processing Unit) stands as a fundamental objective. From the earliest days of computing, relentless efforts by researchers and engineers have been devoted to devising sophisticated techniques aimed at optimising process execution on the CPU. The discipline of CPU scheduling has emerged as a response to this perpetual quest for enhanced efficiency and resource allocation.\nThe roots of CPU scheduling [40 ###reference_b40###], [17 ###reference_b17###] can be traced back to the early days of computing when batch processing systems were prevalent. In these systems, a set of fixed-size jobs was submitted as a batch to the computer, and the CPU had to execute them one after another. This approach suffered from inefficiencies as the CPU lacked responsiveness or processes waiting for the completion of others. Subsequently, to overcome certain limitations, scheduling algorithms were devised to manage the execution order of jobs, aiming to minimise idle time and maximise CPU utilisation.\nEfficient scheduling not only impacts system performance but also has significant economic implications. In today\u2019s digital era, where computational power is a valuable resource, optimising CPU scheduling can lead to substantial cost savings. By ensuring high throughput of the CPUs, organisations can complete computational tasks faster, reduce energy consumption, and ultimately save resources.\nIn addition to efficiency, the consideration of fairness in job selection is a crucial aspect of schedulers that is frequently under-addressed [3 ###reference_b3###]. However, fairness is often viewed as a subjective metric lacking a universally agreed-upon definition in various task-scheduling contexts [47 ###reference_b47###].\nIn this study, we return to the fundamental scheduling paradigm of batch processing. Here, we have a single, fixed-size queue of jobs, each with predetermined burst times. A single machine processes the queue, with preemption allowed between jobs. Despite the growing demand for advanced scheduling algorithms in recent times, this very foundational paradigm has comparatively been overlooked. As a result, the potential usage of the paradigm has also not been utilised across various domains [35 ###reference_b35###]. We acknowledge that to enhance the applicability of this paradigm in today\u2019s computing environment, we need to design algorithms that are not only succinctly efficient but also sufficiently fair [33 ###reference_b33###, 37 ###reference_b37###, 49 ###reference_b49###]. As most of the commonly used algorithms are primarily for time-sharing and multiprocessing systems [40 ###reference_b40###], the question of what feasible means to measure fairness and efficiency for this setting has not been effectively addressed. Our work addresses this question by first analysing these measures and then analyzing the classical algorithms commonly used in time-sharing and multi-programming systems [40 ###reference_b40###] with respect to these measures to understand how they perform in terms of both efficient and fair distribution of job selections across a diverse set of job clusters. Lastly, through our efforts, we propose a novel algorithm that achieves both efficiency and fairness and outperforms traditional algorithms in striking a balance between these two factors. 
Our algorithm strives to enhance system productivity, reduce costs, and improve the overall user experience in various batch or batch-like computing environments. Specifically, our contributions are summarized as:\nWe introduced a novel algorithm concerning fairness and efficiency in a uni-processing batch environment;\nWe theoretically analysed and experimentally demonstrated its efficiency under a diverse and robust experimental setup and compared it with other widely used policies in the light of balancing efficiency and fairness;\nWe furthermore developed an optimised version of our algorithm to reduce additional computational overhead.\nAcross the manuscript, we\u2019ve interchangeably used the terms \u2018Job\u2019, \u2018Task\u2019, and \u2018Process\u2019."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": "The primary goal of CPU scheduling is to allocate the CPU among multiple processes fairly and optimally. This involves making crucial decisions about process order and execution, taking into account various criteria and objectives. Key criteria for CPU scheduling include turnaround time, waiting time, response time etc. Balancing these criteria is essential to ensure fairness, efficiency, and responsiveness in the overall system performance. To achieve these goals, various CPU scheduling algorithms have been developed, each with its own advantages and trade-offs. The following section provides a comprehensive glossary of important terms and a detailed analysis of popular CPU scheduling algorithms, shedding light on their functioning and impact on system behaviour. By understanding the broader context and criteria of CPU scheduling, we can delve into the intricacies of different algorithms and evaluate their effectiveness in meeting the diverse needs of modern computing environments."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Glossary",
|
| 21 |
+
"text": "Arrival Time: The time at which a process arrives and becomes ready for execution.\nBurst Time (bursttime): The amount of estimated CPU time required by a process to complete its execution.\nRemaining Time (remainingtime): The amount of time still needed by a process to complete its execution. For example, if a process has a burst time of 10 seconds and has already been executed for 5 seconds, the remaining time would be (10-5)= 5 seconds.\nWaiting Time (waitingtime): The total time a process spends waiting in the ready queue before getting the CPU. For instance, if a process arrives at time 0, and there are two processes already in the queue, it would wait until the preceding process(es) leaves the CPU for its execution.\nTurnaround Time (turnaroundtime): The total time taken by a process to complete its execution, including both waiting time and execution time. This is also known as completion time which we\u2019ve used interchangeably in the manuscript. Moreover, by definition turnaroundtime = bursttime + waitingtime [40 ###reference_b40###]. So, minimising turnaroundtime is strategically equivalent to minimising waitingtime.\nResponse Time (responsetime): Response time, in the context of computing, signifies the period it takes for the CPU to react to a request initiated by a process. It essentially measures the interval between the arrival of a process and its initial execution.\nPreemption: The act of interrupting the execution of a process before it completes its execution. Preemption allows for the allocation of CPU time to other processes with higher priority or in preemptive scheduling algorithms. Number of times it takes place has been referred as preemptioncount in the paper.\nContext Switching: The process of saving and restoring the state of a process so that it can be resumed from the same point when it is scheduled again. Context switching occurs during preemption or when a new process is selected for execution.\nQuantum/Time Slice (here, timeQuantum): The fixed amount of time allocated to a set of processes while scheduling (prevalent in Round Robin).\nPriority: A value assigned to a process to determine its relative importance or priority in scheduling. Higher-priority processes are given precedence over lower-priority processes for CPU allocation.\nOver time, researchers have proposed enhancements and improvements to classical algorithms (see Appendix A Appendix A ###reference_###). For instance, variations of Round Robin, such as variations of Weighted Round Robin (WRR) [21 ###reference_b21###], [44 ###reference_b44###] and Multilevel Queue Scheduling, have been introduced to address the limitations of strict time slicing. Additionally, policies like Priority Scheduling and Multilevel Feedback Queue Scheduling have been developed to incorporate process priority and dynamically adjust scheduling parameters.\nRecent advancements in CPU scheduling have focused on adaptive and intelligent algorithms, leveraging machine learning and optimisation techniques [31 ###reference_b31###], [18 ###reference_b18###], [43 ###reference_b43###]. These algorithms aim to dynamically adapt to workload patterns, predict burst times, and improve system performance. Examples include Reinforcement Learning-based scheduling [24 ###reference_b24###], [11 ###reference_b11###] algorithms, fuzzy-logic based heuristics [5 ###reference_b5###], [7 ###reference_b7###], [4 ###reference_b4###] and Genetic Algorithm-based approaches [13 ###reference_b13###]. 
While these advanced frameworks for complex operating systems demand substantial additional computation, other strategies, like backfilling algorithms [34 ###reference_b34###], fair-share schedulers [22 ###reference_b22###], gang scheduling [29 ###reference_b29###], deadline-based schedulers [42 ###reference_b42###] etc., are complex to implement, incur computational overhead, do not capture the inter-dependencies of different attributes in a straightforward manner, and are not always suitable and/or applicable to batch processing. The widely used algorithms in scheduling are discussed in Appendix A (Appendix A ###reference_###).\nA batch processing system can be visualised as a fixed-size array of jobs. In this research, we have considered the case where the jobs have predetermined bursttimes. This is the most fundamental paradigm in scheduling theory, which, along with its several variants, serves as the cornerstone of some of the most important industrial processes as well as various domains in data-engineering [49 ###reference_b49###], big data and high-performance computing [35 ###reference_b35###].\nNow, our \u2018Quest\u2019 is to analyse this very simple batch paradigm with the intention of investigating, primarily, the behaviour of the two metrics researchers are most interested in: Efficiency and Fairness.\n\nI. Efficiency\nBatch processing is heavily used in industrial setups [12 ###reference_b12###, 45 ###reference_b45###] where we are mostly concerned with the completion of all the jobs. In our work, there is no explicit deadline, penalty or priority of jobs. This is why our primary means of quantifying efficiency is to measure the average waiting and turnaround time for the whole batch. Researchers in the past had also considered the number of tardy jobs [8 ###reference_b8###], maximum lateness [19 ###reference_b19###], and sometimes makespans [36 ###reference_b36###] in a set of derivatives of the same paradigm, which are not suitable and/or applicable to the objectives of this work.\nII. Fairness\nEfficiency in queuing systems generally has standard metric(s) and definitions to measure the performance of policies. Fairness, unlike efficiency, doesn\u2019t have a universally accepted metric to deal with [47 ###reference_b47###]. Experts have mostly related fairness to selection and to providing proportionate timeQuantum to jobs [3 ###reference_b3###, 47 ###reference_b47###] in the past. However, to the best of our knowledge, there is no universal quantifiable metric of fairness for single-batch processing in a uni-processing system, and the aforesaid improvements are also not primarily intended for fairness in our setting.\nIn our study, our primary focus is on comparing algorithms based on their efficiency, i.e. in terms of minimizing average turnaround and waiting times as stated above. However, efficiency is not the sole consideration, as practitioners also emphasize the system\u2019s average response time [15 ###reference_b15###], which is the duration from a job\u2019s arrival to its first response. In our context, where all jobs are present in the batch and no new jobs are added during processing, the responsetime, depending on the selection of the jobs at each iteration, provides an overview of how fairly jobs have been selected. This is why we quantify fairness by calculating the average responsetime for the entire batch. 
For our algorithm, at each iteration, we calculate the response time for all eligible jobs and then, once no jobs are left, compute the mean of these responsetimes, generating the average response time for the whole batch.\nTo summarise, we evaluate algorithms using three key parameters: average waitingtime, average turnaroundtime, and average responsetime. The average waitingtime and average turnaroundtime are indicators of efficiency, while the average responsetime reflects the fairness of the algorithms. A competitive algorithm not only excels in efficiency but also maintains a satisfactory level of fairness. Lower turnaround and waiting times indicate higher efficiency, while a lower average responsetime indicates fairer job selection."
|
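To make the three evaluation metrics concrete, the following is a minimal Python sketch of how they can be aggregated for a completed batch; the record layout and function name are illustrative assumptions, not taken from the paper.

```python
def batch_metrics(jobs):
    """Aggregate the three metrics for a finished batch.

    `jobs` is a list of dicts with keys 'burst' (bursttime),
    'turnaround' (completion time) and 'first_response' (time from
    arrival until the job first received the CPU).  This layout is an
    illustrative assumption.
    """
    n = len(jobs)
    avg_turnaround = sum(j["turnaround"] for j in jobs) / n
    # waitingtime = turnaroundtime - bursttime, per the glossary definition
    avg_waiting = sum(j["turnaround"] - j["burst"] for j in jobs) / n
    avg_response = sum(j["first_response"] for j in jobs) / n
    return avg_turnaround, avg_waiting, avg_response
```

Lower average turnaround and waiting times indicate higher efficiency, while a lower average response time indicates fairer job selection.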
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Proposed Work",
|
| 27 |
+
"text": "While empirically going through examples of process clusters, a very thoughtful observation for us was to discover the fact that any cluster of processes is never discrete throughout execution. In other words, the current attributes of a particular cluster of processes will inevitably change after some interval and we would be dealing with different types of sub-problems (here cluster of processes) in each cycle. In a multilevel queue or multilevel feedback queue for example, if we have an interval of \u2018t\u2019 units and a set of algorithms \u2018S\u2019, we may halt after each \u2018t\u2019 unit of time, analyse the system, and can choose the \u2018most suitable\u2019 algorithm from \u2018S\u2019. For a tiebreaker amongst comparable algorithms, we can even take their space and time complexity or other parameters into account. Multilevel queues and multilevel feedback queues (MLQs/MLFQs) are computationally intensive [40 ###reference_b40###] and can be overkill when striking a balance between efficiency and fairness, this very observation was the main inspiration to come up with a metric that can address the crucial aspects and intricacies of the process attributes after a certain period. Our approach is primarily experimental and substantially empirical in nature. After experimenting with several ways of measuring the changes in various parameters and their interdependencies, we have finalised the ratio in the algorithm below based on a few aspects elaborated in 3.1 ###reference_### section. While this may not be as sophisticated as MLQ / MLFQ, the interdependencies of attributes, if addressed effectively, lead to a near-optimal solution. With the potential to address the limitations of traditional batch processing, our approach opens doors to more conclusive approaches in this domain.\nThe name of our algorithm is FairBatch.\n###figure_1###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.1",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Analysis",
|
| 33 |
+
"text": "The proposed algorithm starts with calculating the \u2018fairnessRatio\u2019 and sorting the processes based on the calculation. The aspects we have considered while designing the ratio are elaborated in following points.\nI. Balanced Selection of Jobs\nIn equation 1 ###reference_### the ratio tracks both how much a process has been waiting before execution, and how much progress has been made, before scheduling. In the numerator, (bursttime - remainingtime) is the measure of the amount of bursttime which has been consumed till a particular iteration, or in other words, this is the progress3.1 ###reference_### of a process till the current iteration. The waitingtime is a measure of the total amount of time one process has waited till the time it gets under execution.\nAs the processes, if not under execution, have to wait for the whole time quanta (timeQuantum) allotted for a particular cycle before moving to next iteration, the update at each cycle would be:\nW is the set of processes that didn\u2019t execute in a particular iteration, p is a process, p.waitingtime is the waitingtime of p.\nThe proposed algorithm takes a unique approach by considering both shorter and longer processes (or part of processes) to ensure a balanced and efficient execution. By using the fairnessRatio and sorting mechanisms, the algorithm strategically selects a set of jobs that allows shorter jobs to make more progress in each iteration, while also reducing the waiting time for longer processes. As a result, the algorithm achieves a well-rounded mix of processes, promoting fair distribution of CPU resources and significantly improving the response time of longer processes. The approach\u2019s dynamic nature ensures that the scheduler efficiently handles various process types, creating an optimal balance between fairness and efficiency.\nTo address a specific edge case in the algorithm, we initialise the waitingtime with one. This situation arises when a large process arrives at the front of the queue in the very first iteration, followed by significantly smaller processes. In the first cycle, the remainingtime and bursttime of processes is equal. If both the (bursttime - remainingtime) and waitingtime in eq.1 ###reference_### are being initialised with 0, it will act as simple fcfs scheduling which makes subsequent processes waiting for execution from the very first cycle, negatively impacting the overall efficiency of the whole batch. Alternatively, we can add 1 to waitingtime if we initialised it to 0. Considering other attributes not changing, waitingtime time will be increasing the LHS of eq.1 ###reference_### proportionately with respect to its magnitude.\nII. Progress in Fairness ratio\nProgress is another pivotal dimension that affects fairness. Processes that have made substantial progress in their execution should be prioritised to avoid unnecessary interruptions and context switches. By incorporating the progress made by each process, as calculated by the difference between bursttime and remainingtime in the numerator of eq1, the fairnessRatio acknowledges the importance of honouring the advancements of processes. The more the progress, the lesser the remainingtime. Under a fixed timeQunatum, shorter processes are given more opportunities to make progress due to their shorter bursttime. This is achieved by inversely weighting the bursttime and remainingtime in the ratio calculation. 
Considering other attributes not changing, progress will be increasing the LHS of eq.1 proportionately with respect to its magnitude.\nIII. Limiting Preemption using Fairness ratio\nThe fairnessRatio takes into account the preemptioncount (initialised with 1 to remove unnecessary Zero-Division-Error or explicitly 1 can be added if initialised as 0 ), which considers the cost of context switching and interrupts. By inversely weighting the fairnessRatio based on the preemptioncount, the algorithm promotes efficiency by reducing unnecessary preemption, minimising CPU overhead, and enhancing responsiveness.\nFairBatch works with at least 1 process\nAssume that there are no processes in the queue (, : set of processes in the queue).\nIn this case, the while loop condition is not satisfied.\nAccording to Algorithm 1, if the while loop condition is not satisfied, the algorithm won\u2019t run.\nTherefore, if , the algorithm won\u2019t run.\nThis proves that the presence of at least one process in the queue is necessary for the algorithm to run. Hence, the proposition is proven.\n\u220e\nThere is no preemption within processes in a particular iteration in FairBatch\nConsider the FairBatch algorithm (Algorithm 1) with a fixed time quantum denoted as . The algorithm operates as follows:\nThe algorithm runs until the time quantum is exhausted.\nDuring this time, it sequentially processes the tasks from the front of the sorted queue.\nIf is less than the remaining time of the first task in the queue, only a proportional fraction of that task\u2019s execution is performed in the current iteration.\nAfter the complete execution of a task or its proportional fraction, if there is time remaining in , the algorithm proceeds to the next task in the queue.\nThis process continues until is exhausted without reordering the queue or allowing other processes to interrupt a currently executing process.\nTo prove the absence of preemption within processes in a particular iteration, we will employ a proof by contradiction.\nAssume, for the sake of contradiction, that there is preemption within processes in a particular iteration in FairBatch. Let be the fixed time quantum for this iteration. We define the following terms:\nrepresents the -th process in the queue.\nrepresents the remaining time of process at the beginning of the iteration.\nrepresents the execution time allocated to process within the iteration.\nrepresents the queue of processes at the beginning of the iteration.\nNow, consider the scenario in which preemption occurs within the iteration:\nProcess begins execution with remaining time and is interrupted after time units (where ).\nThe queue is reevaluated, and other processes may be given a chance to execute.\nAfter is preempted, it is possible that another process with starts executing.\nTherefore, this scenario of preemption within an iteration contradicts the fundamental principles of the FairBatch algorithm. Hence, we conclude that there is no preemption within processes in a particular iteration in FairBatch.\n\u220e\nBetween 2 consecutive iterations, there is at max 1 preemption.\nLet represent the current iteration, and the next iteration. We will analyze the occurrence of preemptions.\n1. If the -th iteration completes with all processes being completely executed, the next iteration begins with a different set of processes. In this scenario, there is no possibility of preemption.\n2. Now, consider the case where not all processes are completely executed in the -th iteration.\na. 
Let be the last process executed in the -th iteration with the highest fairness ratio. This process may still have the highest fairness ratio in the -th iteration.\nb. There are two cases to consider:\n- Case 1: If is chosen for execution at the beginning of the -th iteration.\n- In this case, there is no preemption in a continuous frame of reference assuming no delays between the iterations.\n- Case 2: If a different process is chosen at the very front of the queue in the -th iteration.\n- This results in a preemption.\nTherefore, in either case, there is either no preemption or at most one preemption between two consecutive iterations.\nThis completes the proof, demonstrating that between two consecutive iterations in FairBatch, there can be at most one preemption.\n\n\u220e"
|
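Equation (1) itself is not reproduced in the extracted text, so the exact expression of the fairnessRatio cannot be recovered here. The sketch below is only one plausible composition that matches the stated behaviour: progress and waitingtime raise the ratio, remainingtime and preemptioncount weight it inversely, and both waitingtime and preemptioncount are initialised to 1.

```python
from dataclasses import dataclass

@dataclass
class Process:
    burst_time: int
    remaining_time: int
    waiting_time: int = 1      # initialised to 1, per the edge case discussed above
    preemption_count: int = 1  # initialised to 1 to avoid division by zero

def fairness_ratio(p: Process) -> float:
    # (bursttime - remainingtime) is the progress made so far; together with
    # waitingtime it raises the ratio, while remainingtime and preemptioncount
    # lower it.  The precise form of eq. (1) is an assumption here.
    progress = p.burst_time - p.remaining_time
    return (progress + p.waiting_time) / (p.remaining_time * p.preemption_count)
```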
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.2",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Further Analysis of the Parameters:",
|
| 39 |
+
"text": "In our investigation, we will analyze when our setting achieves the minimum and maximum waiting and response time.\nMinimal Average Response Time:\nUnder our current setting, out of n jobs in a batch, only 1 can be under execution at a time and rest \u2018n-1\u2019 job wait till there is any context switching. The greedy way is allocating each task the minimum possible execution time, regardless of its bursttime and moving on to the next processes. For example, if there are \u2018n\u2019 number of tasks and if each task is granted \u2018k\u2019 unit of time, the first process experiences no initial delay, the second incurs a k-unit wait due to the first task, and so on, with the n-th process waiting for k unit for each preceding task. In this scenario, the average response time is calculated as:\nremark: Usually under general settings, k = 1 [40 ###reference_b40###].\nMinimal Average Waiting Time:\nIt is well-known that the SJF/SRTF (preemptive version of SJF) Rule minimises the average waiting time. [9 ###reference_b9###, 41 ###reference_b41###]\nIn our setting, SJF and SRTF are equivalent in terms of working and performance\nSRTF switches the context iff at any instance it discovers some lower burst(s) compared to the process currently under execution. In our setting, we have all the processes in the batch available from the very beginning and there is no addition of other jobs at any time. SRTF starts with the least burst from the batch and after each unit of time, the remaining time of the least burst is trivially the least (as there is guaranteed progress using proposition 3.1). This is why there is no preemption as there cannot be any other job at any instance which would have a lesser burst than the currently executing process for all jobs in the batch. This is exactly the way SJF works.\n\u220e\nremark: For the sake of consistency, we have used SRTF and SJF interchangeably throughout the manuscript.\nMaximum Average Response Time for the Batch:\nIn our setting, the batch will experience the maximum response time when all the waiting processes have to wait the most. Exactly \u2018n-1\u2019 jobs from a batch with \u2018n\u2019 jobs will wait whenever exactly 1 job is under execution. If every time the currently executing job itself is of the highest burst amongst the processes and there is no preemption until the entire completion of the executing job, the rest of the jobs wait for the longest possible time. Longest Process First(LPT) chooses the longest job to execute without preemption. As a result, in our environment for a single batch with a single processing unit, it trivially maximises the responsetime. It also maximises the average completion time [9 ###reference_b9###].\nMaximum Average Waiting Time for the Batch:\nLPT is the inverse of the SJF policy and maximises mean completion time [9 ###reference_b9###]. However, since the conventional approach prioritizes minimization of completion time, especially in our comparison with SRTF/SJF algorithms, we opt to evaluate the preemptive variant, LRTF. This choice aims to assess its potential for enhanced response times due to its aggressive preemptive nature.\nremark: As our main goal is to revitalise the single batch environment, we\u2019ve given the results for the extreme cases above but in realistic scenarios, to the best of our knowledge, there is no single algorithm that simultaneously achieves the best of 2 world (lowest possible average waiting and response time). 
Any algorithm designed for this setting, including FairBatch, would always fall within the extreme ranges demonstrated. As the focus of this research is on considering waiting, turnaround and response time together across the distribution, the trivial case in which both the waiting and response time are simultaneously lowest is achieved only when the batch contains a single job, using proposition 3.1. However, algorithmic behaviour changes and shows different alignments with regard to the underlying distribution of the jobs\u2019 burst times.\nThis is why we\u2019ve studied and investigated exhaustively how our algorithms behave under our robust experimental setup in Section 4. There can be many types of distributions; for our research, we have considered those that are most commonly observed in real-world scenarios and that can simulate the problems mentioned earlier, such as the convoy and starvation effects."
|
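The expression referred to above by "the average response time is calculated as:" is missing from the extracted text; from the arithmetic series described (the i-th process waits i*k units before its first response), it can be reconstructed as:

```latex
\overline{\text{response}}
  \;=\; \frac{1}{n}\sum_{i=0}^{n-1} i\,k
  \;=\; \frac{k\,(n-1)}{2},
  \qquad\text{so for } k = 1:\;
  \overline{\text{response}} = \frac{n-1}{2}.
```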
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.3",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Importance of a Suitable Time Quantum",
|
| 45 |
+
"text": "In a queue of processes, even when executing just one process for the minimal feasible time, (say \u201ct\u201d) for the sake of achieving utmost fairness, we must re-calculate the fairness ratio of every process within the batch after each \u201ct\u201d unit of time. Now \u201ct\u201d mathematically can be arbitrarily small or large. If \u201ct\u201d is exceedingly small, process progress becomes exceedingly marginal with excessively unnecessary context switches, while an arbitrarily large \u201ct\u201d renders the algorithm\u2019s behaviour akin to First-Come-First-Serve (FCFS) for the majority of the time \u2013 both of which contradicts the philosophy of FairBatch. Moreover in realistic scenarios, the distribution on job\u2019s bursttimes may follow several distribution or might be completely random in nature. For example, in a positively skewed distribution, the mean is greater than the median and visa versa for negatively skewed counterpart. Regardless of distributions or random distributions, mean and median are always prominent factors to get an overview of the central tendency. This is why we have diligently explored diverse statistical formulations across current literatures [28 ###reference_b28###, 39 ###reference_b39###, 20 ###reference_b20###, 48 ###reference_b48###, 6 ###reference_b6###, 27 ###reference_b27###], and have finalised, inspired by the empirical work presented in [38 ###reference_b38###]."
|
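The final timeQuantum expression adopted from [38] is likewise not present in the extracted text. The sketch below is only a placeholder consistent with the stated reasoning that both the mean and the median of the burst times should drive the quantum; the particular combination is an assumption, not the paper's formula.

```python
import math
import statistics

def dynamic_time_quantum(remaining_times):
    """Placeholder for the paper's timeQuantum rule (the exact formula from
    [38] is not shown in the extracted text).  It merely combines the mean
    and median of the current remaining times, clamped to at least one unit."""
    mean = statistics.mean(remaining_times)
    median = statistics.median(remaining_times)
    return max(1, math.ceil((mean + median) / 2))
```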
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.4",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Optimisation of the Algorithm",
|
| 51 |
+
"text": "We can reduce the computational overhead produced by sorting the whole batch at each cycle by employing an efficient framework for grouping up and running the processes sequentially.\n- Consider the following for a particular cycle :\nS is the set of available processes with remaining time ,\nR is the set of fairnessRatios of processes.\ntimeQuantum = .\nis the remaining time of process in .\nThis \u2018Select\u2019 algorithm outputs a set of appropriate proportions of job(s) based on fairnessRatios and timeQuantum. The \u2018Runner\u2019 algorithm takes the set of processes \u2018s\u2019 as a input from \u2018Select\u2019. It sorts the processes in \u2018s\u2019 based on their fairnessRatio. Run the processes sequentially and update their attributes. After the cycle ends, the attributes of the process not were under execution is updated and the scheduler runs the subsequent cycles till there is process left in the batch.\nThe \u2018Select\u2019 algorithm starts by taking as many processes with values greater than the median as possible, staying within the weight constraint. If there is any leftover capacity, it takes processes with values equal to the median. If further capacity remains, the algorithm recursively considers combinations of processes to maximize the total value while respecting the timeQuantum limit. The median calculation, utilized in both FairBatch and the \u2018Select\u2019 procedure, can be computed in linear time. In \u2018Select\u2019, each recursive call requires linear time, excluding the time spent on potential recursive calls it may make. Since there is only one recursive call, it pertains to a problem size at most half of the original. As a result, the running time can be expressed by the following recurrence relation:\nT(n) T(n/2)+ (n) therefore, T(n) = O(n), using master\u2019s theorem.\nIf the \u2018Select\u2019 algorithm returns \u2018k\u2019 jobs to be scheduled, \u2018Runner\u2019 will take additional k.log k asymptotic time complete the scheduling due to sorting.\nSo overall the scheduler takes: O(n+ k.log k) per cycle where k n."
|
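A simplified sketch of the 'Select' step described above follows. It assumes each job is a (fairnessRatio, remainingtime) pair and omits the proportional splitting of a job that only partially fits, so it illustrates the median-pivot recursion rather than reproducing the paper's exact procedure.

```python
import statistics

def select(jobs, capacity):
    """Pick jobs for one cycle without exceeding `capacity` (the timeQuantum),
    preferring fairness ratios above the median, then filling leftover
    capacity at the median, then recursing on the rest."""
    if not jobs or capacity <= 0:
        return []
    median = statistics.median(ratio for ratio, _ in jobs)
    high = [j for j in jobs if j[0] > median]
    equal = [j for j in jobs if j[0] == median]
    low = [j for j in jobs if j[0] < median]

    need = sum(rem for _, rem in high)
    if need > capacity:
        return select(high, capacity)        # recurse on the smaller half
    chosen = list(high)
    capacity -= need
    for job in equal:                        # fill leftover capacity at the median
        if job[1] <= capacity:
            chosen.append(job)
            capacity -= job[1]
    if capacity > 0:
        chosen += select(low, capacity)      # recurse on the remaining half
    return chosen
```

A 'Runner' counterpart would then sort the chosen jobs by fairnessRatio and execute them sequentially, which is where the O(k log k) term in the per-cycle cost comes from.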
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Experimental Setup",
|
| 57 |
+
"text": "FCFS is the most commonly used algorithm for batch jobs but we\u2019ve also taken preemptive and advanced algorithms like SRTF, LRTF, RR, and CFS along with FCFS for comparison. Along with\ntheoretically well-grounded policies such as SRTF, LRTF, RR and FCFS, we have considered adopting the implementation CFS algorithm111https://elixir.bootlin.com/linux/v5.19.9/source/kernel/sched/fair.c#L7429 ###reference_ource/kernel/sched/fair.c#L7429### for a uni-processing system as per our setting222following a similar pythonic adaptation used:https://github.com/SanchithHegde/completely-fair-scheduler ###reference_y-fair-scheduler### . We consider the default \u2018nice\u2019 value of individual jobs in the batch as 0333https://man7.org/linux/man-pages/man7/sched.7.html ###reference_d.7.html###. By comparing with these traditional and contemporary scheduling approaches enriches the evaluation, blending theoretical robustness with practical relevance, and ensuring a comprehensive assessment.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Description of the Test Cases",
|
| 63 |
+
"text": "Unlike recent works [28 ###reference_b28###, 39 ###reference_b39###, 20 ###reference_b20###, 48 ###reference_b48###, 6 ###reference_b6###, 27 ###reference_b27###] and others where primarily evaluations of algorithms are based on mere set of unsystematic, tiny examples, we aim to provide a robust evaluation by considering diverse set of clusters of jobs and a large number of test cases. To the best of our knowledge, there is no publicly available, sufficiently large, domain-agnostic, benchmark dataset of bursttimes that would be suitable for comparing under our setting and methodologies. It was a significant motivation for us to first create a diverse and sufficiently large dataset of bursttimes. We have made it publicly available with the aim of enabling the research community to utilize it in their own work. Eventually, we evaluated all algorithms on this dataset to assess their performance. For this evaluation, first of all we have carefully selected several probability distributions to generate the test cases: we have considered Normal, Exponential, Geometric, Negative Binomial, Poisson, Uniform, Pareto, Gamma, and Standard Cauchy (Lorentz) distribution. The Normal distribution enabled us to assess algorithms\u2019 ability to handle data clustering around a mean value, while the Exponential distribution shed light on performance in scenarios with exponential decay-like patterns. The Geometric distribution offered insights into algorithms\u2019 response to decreasing probabilities of longer burst times. Simulating long-tailed variations, the Negative Binomial distribution, being a general case of the geometric distribution, allowed us to scrutinize algorithm behavior in such scenarios. By employing the Poisson distribution, we observe algorithm behavior under specific occurrence patterns of processes. The Uniform distribution challenged algorithms with burst times exhibiting a uniform and evenly distributed pattern. The Pareto distribution, on the other hand, is characterized by a heavy tail, making it suitable for modeling phenomena where a small number of events have a disproportionately large impact. The Gamma distribution, with its shape and rate parameters, offers versatility in modeling various phenomena, including wait times and service times. The Standard Cauchy distribution, also known as the Lorentz distribution, represents a distribution with heavy tails and no defined mean or variance, providing insights into scenarios with extreme outliers. Together, these distributions provide a comprehensive overview of diverse bursttime patterns [32 ###reference_b32###], offering a holistic assessment of algorithm performance across various real-world scenarios, which has not been covered in previous studies. While the distributions we\u2019ve covered cover many scenarios, sometimes load on the CPUs can be even more varied and complex. To ensure our evaluations are robust, we\u2019ve also considered bimodal, trimodal, and multimodal distributions. In a continues frame of reference (the CPU is processing arbitrarily large number of jobs),these distributions exhibit more irregular and diverse bursttime patterns. In the first phase of analysis we study the first nine distributions and in the second phase we investigate into these three special distributions.\nFor each distribution, we first produce 100 test cases, with each test case consisting of 100 integers bursttimes.\nAfter this initial phase, we fine tune the entire dataset. 
While distributions are primarily generated444https://numpy.org/doc/1.16/reference/routines.random.html ###reference_nes.random.html### by randomly sampling values, these samples may not align with the realistic characteristics of job burst times, where each job\u2019s burst time must fall within a feasible range. Simply relying on theoretical distributions could lead to unrealistic or unfeasible burst times, skewing the evaluation of scheduling algorithms. Our fine-tuning has 2 phases:\nAdhering to Practical Constraints: While we initially generated instances from several distributions to emulate diverse loads on the CPU, we recognize that CPUs do not process loads exactly the way they arrive in the system (in batch form here) following a precise distribution. CPUs cannot process arbitrarily small portions of a job; they allocate a minimum execution time for all jobs by default. Similarly, CPUs cannot allocate arbitrarily large processes. If a process exceeds the CPU\u2019s allocation, the remainder of the process is stored in secondary memory. Using several page ranking algorithms [10 ###reference_b10###], the CPU executes these remaining portions in subsequent iterations or batches [40 ###reference_b40###]. To account for this, we ensure that our dataset reflects this practical constraint, making it more applicable not only for rigorous evaluations but also for practical scenarios: After generating 100 samples for each distribution, we \u2018clip\u2019 each sample to fall within the range of 1 to 500 for the bursttimes of each job. This process reflects resource allocation constraints inherent to CPUs. Values less than 1 are set to 1, representing the minimum resource allocation that a CPU can assign to a job. Similarly, values greater than 500 are set to 500, representing the maximum resource allocation that a CPU can assign to a job. This is how, from a set of randomly sampled distributions, we finalized these job clusters, adhering to the realistic constraints of CPU processing.\nNormalisation of the Clusters: After the clusters are ready, we normalise all finalised samples. We divide each bursttime by the sum of the bursts present in the sample. After that we multiply each burst by 25000. We normalised each sample such that the sum of bursts in each sample across clusters is (almost) equal to 25000. By scaling the data, we eliminate the influence of magnitude disparities, allowing us to focus solely on the algorithmic performance with respect to bursttime distribution patterns between any pair of samples.\nFurthermore, for a fine-grained comparison, we\u2019ve used box plots for each metric to analyse results across various distributions for each algorithm. We\u2019ve referred to our algorithm as \u201cFairBatch\u201d here and have taken the ceil of the mean of the bursttimes as the timeQuantum in RR, as it remains almost equal for all the lists under examination. We describe the legends used in the experiments in Table 1 ###reference_###."
|
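As a rough illustration of the two-phase fine-tuning described above (clipping to a feasible burst-time range, then rescaling each test case to a common total), the NumPy sketch below generates one cluster; the distribution parameters and the helper name are assumptions, since the paper does not list them in the extracted text.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cluster(sampler, n_cases=100, n_jobs=100, total=25_000, lo=1, hi=500):
    """Generate `n_cases` test cases of `n_jobs` burst times each:
    phase 1 clips raw samples to the feasible range [lo, hi];
    phase 2 rescales each case so its bursts sum to roughly `total`."""
    cluster = []
    for _ in range(n_cases):
        bursts = np.clip(np.rint(sampler(n_jobs)), lo, hi)   # phase 1: practical constraints
        bursts = bursts / bursts.sum() * total                # phase 2: normalisation
        cluster.append(bursts)
    return cluster

# Example: an analogue of the 'Nor Cluster' with an assumed mean and spread.
nor_cluster = make_cluster(lambda n: rng.normal(loc=250, scale=80, size=n))
```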
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Discussion",
|
| 69 |
+
"text": "Each algorithm is tested on 100 test cases present in each cluster, we represent their aggregated results in the form of boxplots. Before analysis, we are presenting a concise introduction of boxplots for readers at appendix B Appendix B ###reference_###.\nFirst Phase of the Analysis: In this phase, we investigate the algorithms against\n\u2018Nor cluster\u2019, \u2018Expo cluster\u2019, \u2018Geo cluster\u2019, \u2018N.Bio cluster\u2019, \u2018Psn cluster\u2019, \u2018Ufm cluster\u2019, \u2018Gamma cluster\u2019, \u2018Cauchy cluster\u2019, \u2018Pareto cluster\u2019. These comparisons are primarily made to simulate homogeneous yet diverse patterns of CPU loads. For our average turnaround and waiting time, the benchmarking algorithm is SRTF as mentioned. We compare the algorithms based on their comparative performance with respect to SRTF as demonstrated across 2 ###reference_###,3 ###reference_###,4 ###reference_###. We ignore the outliers and relatively compare the performances based on the IQR, median, and range.\nIf we discretely compare the average turnaround, waiting and response time for FairBatch (2 in the figures):\nOut of 9 cases, FairBatch has achieved the least average turnaroundtime in 7 cases after SRTF. In the case of \u2018Nor Cluster\u2019, FairBatch has achieved the third least average turnaroundtime after FCFS (1 in figures) and SRTF. In the case of \u2018Psn Cluster\u2019, it comes after SRTF, FCFS, and SRTF respectively in 3 ###reference_###,\nOut of 9 cases, FairBatch has achieved the least average waitingtime in 7 cases after SRTF. In the case of \u2018Nor Cluster\u2019 in 3 ###reference_###, FairBatch has achieved the third least average waitingtime after FCFS (1 in figures) and SRTF. In the case of \u2018Psn Cluster\u2019, it comes after SRTF, FCFS, and SRTF respectively in 3 ###reference_###.\nOut of 9 cases, FairBatch has achieved the least average responsetime in 7 cases. In the case of \u2018Nor Cluster\u2019 and \u2018Psn Cluster\u2019 in 3 ###reference_###, FairBatch has achieved the second least average responsetime after LRTF (4 in figures). The only algorithm that ever exceeds FairBatch in responsetime is LRTF only in these 2 cases.\nInvestigating the performance in the case of \u2018Nor Cluster\u2019\nHere, the underlying distribution is the normal distribution, where approximately 68% of the data falls within one standard deviation of the mean, about 95% falls within two standard deviations, and around 99.7% falls within three standard deviations of the mean [32 ###reference_b32###]. So, in this cluster, the majority of the jobs\u2019 burst times are centred around a single value (the mean). In other words, most jobs are very comparable in size (bursttimes). Now, FCFS, being a non-preemptive algorithm, completes jobs without any context switch. As a result, here in terms of turnaround and waiting time, FCFS excels. However, due to FCFS\u2019s non-preemptive nature, it cannot respond to other jobs waiting in the queue before one job is completely executed, leading to the issue of the convoy effect [40 ###reference_b40###]. FairBatch, in this case, preemptively switches contexts based on the fairness ratio, which prioritizes waitingtime for jobs as well as progress in currently under-executing jobs. This is evident when comparing the average responsetime of FairBatch and FCFS. 
FairBatch not only competitively prioritizes efficiency but also gives weightage to fairness in job selection.\nOn the other hand, LRTF achieves the best responsetime in this cluster. LRTF, being a preemptive algorithm, at every instance chooses the longest available job, performs bare minimum execution, and moves on to the next largest job. As most of the jobs are similar in size, this procedure results in excessive preemption, providing a \u201cresponse\u201d to each job in the least time. While FCFS was completely focused on efficiently executing all the jobs one after another, causing severe convey in the system, LRTF aggressively responded to almost every available job with marginal progress, which drastically reduced its efficiency 3 ###reference_###. FairBatch, on the other hand, strikes an excellent balance between efficiency and fairness that is unparalleled even in SRTF.\nObserving the trade-off Arose in the case of \u2018Psn cluster\u2019\nThe Poisson distribution is primarily used to model the likelihood of an event occurring a certain number of times within a specific period. We have selected this discrete distribution to examine the occurrence of repetitive loads in CPUs [26 ###reference_b26###]. Since batch processing is a CPU-bound process, repetitive loads are frequently observed in the system. Consequently, this cluster is heavily influenced by frequently occurring comparable jobs. FCFS, as a non-preemptive algorithm, aggressively executes repetitive jobs, causing other jobs to wait disproportionately and eventually starve. The next algorithm that outperforms FairBatch is CFS. As CFS (SCHED_BATCH) is a widely used finely tuned scheduling policy for large batches, operating on the principles of virtual runtime and nice values, it is well-suited for managing commonly observed repetitive loads on systems. However, due to its design555https://elixir.bootlin.com/linux/v5.19.9/source/kernel/sched/fair.c##L7429 ###reference_ource/kernel/sched/fair.c##L7429### lacking a suitable preemption-supportive mechanism and heuristics, it does not perform well in terms of response time.\nOn the other hand, LRTF aggressively preempts and responds to frequently repetitive loads. Similar to the \u2018Nor Cluster\u2019, while it achieves excellent responsiveness, it shows marginal efficiency. In contrast, FairBatch does not monopolize resources for repetitive jobs but evenly distributes them among all jobs while maintaining competitive efficiency. This demonstrates that the FairBatch algorithm exhibits strong adaptability and resilience in handling the randomness present in different data distributions.\nSecond Phase of Analysis: In the previous phase, we have observed the unparalleled superiority in terms of both efficiency and fairness of FairBatch across 7 out of 9 cases. We have also analysed when there is a harsh trade-off between efficiency and fairness, how all algorithm falls apart in consistently maintaining the trade-off except FairBatch. 
In order to delve into more on these scenarios comprising of critical trade-offs, after the first phase, here we primarily investigate bimodal, trimodal and multimodal clusters.\nIf we discretely compare the average turnaround, waiting and response time for FairBatch in 5 ###reference_###:\nFairBatch achieves the least average turnaroundtime after FCFS and SRTF in all three cases.\nFairBatch achieves the least average waitingtime after FCFS and SRTF in all three cases.\nFairBatch achieves the least average responsetime in all three cases.\nIn regular batch processing, heterogeneous loads are not commonly observed unless the batch is sufficiently large. These three distributions were created by concatenating more than one heterogeneous subpopulation of normal distributions. As a result, the central tendency of the measurements is inherently more complex than the previous clusters. These clusters are generated to adversarially test the algorithms to observe how they perform when there are multiple distinct subpopulations creating heterogeneous loads on the CPU. Depending on the sequence of the subpopulations, the arrival sequence of the jobs changes, potentially resulting in a change in the average turnaround, waiting, and response times.\nFCFS has a distinct advantage in these cases: it minimizes the maximum turnaroundtime across jobs for any finite arrival sequence of jobs [16 ###reference_b16###]. Irrespective of the sequence of these subpopulations, FCFS achieves the least turnaround and waiting times across all three clusters after SRTF in our experiment. While it achieves superior efficiency due to its theoretical guarantees, it is evident from Figure 5 ###reference_### that it does not perform well in terms of responsetime in any of these clusters. It suffers from severe starvation and lags in fairness in job selection. FairBatch in all these cases performs excellently in handling the efficiency and fairness trade-off as evident from the results in figure 5 ###reference_###. Not only FCFS, but also CFS and LRTF along with others do not perform well when it comes to selecting jobs in a fairer way while maintaining competitive efficiency unlike FairBatch.\nStability of the performance: another advantage of FairBatch\nFairBatch offers another distinct advantage over algorithms like Round Robin and FCFS due to its stability and predictability. While the sequence of execution and overall results of these algorithms heavily depend on the sequence in which the job arrived, FairBatch eliminates this variability by arranging and executing jobs based on fairnessratio and timeQuantum. This characteristic makes the algorithm a more reliable and consistent scheduling algorithm, ensuring stability and predictability in its performance.\nAfter a comprehensive comparison of all algorithms across various distributions as demonstrated in fig:2 ###reference_###, 3 ###reference_###, 4 ###reference_###, 5 ###reference_###, it is evident that the FairBatch performs exceptionally competitively with respect to the benchmark (SRTF) for all distributions showcasing its superiority through the delicate balance it strikes between drastically reducing waitingtime, ensuring responsiveness, and eliminating starvation. With stability, predictability, and unmatched efficiency, our algorithm bridges the gap between suitable responsiveness and required efficiency through a fair and holistic approach, as demonstrated extensively."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Conclusion",
|
| 75 |
+
"text": "Our research paper addresses a significant gap in the scheduling of batch processing systems by leveraging the favourable aspects of advanced algorithms. While classical algorithms like SRTF, RR, and LRTF, CFS etc have been predominantly utilised in interactive systems, their potential benefits have been mostly overlooked in the context of batch processing. Our proposed algorithm aims to bridge this gap by harnessing the positive attributes of these advanced algorithms without inheriting their limitations in a unique and wholesome way. By focusing on fairness, efficiency, and system performance, our algorithm provides a comprehensive solution that outperforms traditional approaches in multiple parameters. Through careful consideration of the strengths and weaknesses of existing algorithms, we have developed a novel approach that optimises scheduling in batch systems, effectively addressing the existing void and achieving superior results."
|
| 76 |
+
}
|
| 77 |
+
],
|
| 78 |
+
"appendix": [],
|
| 79 |
+
"tables": {
|
| 80 |
+
"1": {
|
| 81 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.1\">Abbreviation</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.2\">Description</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.1.1.3\">Used in</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.1\">Nor Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.2\">Collection of arrays of jobs following the underlying Normal distribution</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.2.2.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.3.3.1\">Expo Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.3.3.2\">Collection of arrays of jobs following the underlying Exponential distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.3.3.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T1.2.4.4.1\">Geo Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.4.4.2\">Collection of arrays of jobs following the underlying Geometric distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.4.4.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.5.5.1\">N.Bio Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.5.5.2\">Collection of arrays of jobs following the underlying Negative Binomial distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.5.5.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.6.6.1\">Psn Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.6.6.2\">Collection of arrays of jobs following the underlying Poisson distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.6.6.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 
3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.7.7.1\">Ufm Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.7.7.2\">Collection of arrays of jobs following the underlying Uniform distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.7.7.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.8.8.1\">Bimodal Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.8.8.2\">Collection of arrays of jobs following the underlying Bimodal distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.8.8.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch 
Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.9.9.1\">Trimodal Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.9.9.2\">Collection of arrays of jobs following the underlying Trimodal distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.9.9.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.10.10.1\">Multimodal Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.10.10.2\">Collection of arrays of jobs following the underlying Multimodal distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.10.10.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.11.11.1\">Gamma Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.11.11.2\">Collection of arrays of jobs following the underlying Gamma distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.11.11.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental 
Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.12.12.1\">Cauchy Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.12.12.2\">Collection of arrays of jobs following the underlying Cauchy (Lorentz) distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.12.12.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.13.13.1\">Pareto Cluster</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.13.13.2\">Collection of arrays of jobs following the underlying Pareto distribution</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.13.13.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch 
Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.14.14.1\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.14.14.2\">FCFS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.2.14.14.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.15.15.1\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.15.15.2\">FairBatch</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.15.15.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.16.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.16.16.1\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.16.16.2\">SRTF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.16.16.3\">Fig:<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.17.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.17.17.1\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.17.17.2\">LRTF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.17.17.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.18.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.18.18.1\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.18.18.2\">RR</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.18.18.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text 
ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.19.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.2.19.19.1\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.2.19.19.2\">CFS (SCHED_BATCH)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.2.19.19.3\">Fig:<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F2\" title=\"Figure 2 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F3\" title=\"Figure 3 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F4\" title=\"Figure 4 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10062v7#S4.F5\" title=\"Figure 5 \u2023 4 Experimental Setup \u2023 Revitalising the Single Batch Environment: A \u2018Quest\u2019 to Achieve Fairness and Efficiency\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.3.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.4.2\" style=\"font-size:90%;\">Abbreviations and descriptions of clusters and algorithms</span></figcaption>\n</figure>",
|
| 82 |
+
"capture": "Table 1: Abbreviations and descriptions of clusters and algorithms"
|
| 83 |
+
}
|
| 84 |
+
},
|
| 85 |
+
"image_paths": {
|
| 86 |
+
"1": {
|
| 87 |
+
"figure_path": "2308.10062v7_figure_1.png",
|
| 88 |
+
"caption": "Figure 1: Flowchart of the algorithm",
|
| 89 |
+
"url": "http://arxiv.org/html/2308.10062v7/x1.png"
|
| 90 |
+
},
|
| 91 |
+
"2(a)": {
|
| 92 |
+
"figure_path": "2308.10062v7_figure_2(a).png",
|
| 93 |
+
"caption": "Figure 2: Exponential, Geometric & Negative Binomial clusters",
|
| 94 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Expo.png"
|
| 95 |
+
},
|
| 96 |
+
"2(b)": {
|
| 97 |
+
"figure_path": "2308.10062v7_figure_2(b).png",
|
| 98 |
+
"caption": "Figure 2: Exponential, Geometric & Negative Binomial clusters",
|
| 99 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Geo.png"
|
| 100 |
+
},
|
| 101 |
+
"2(c)": {
|
| 102 |
+
"figure_path": "2308.10062v7_figure_2(c).png",
|
| 103 |
+
"caption": "Figure 2: Exponential, Geometric & Negative Binomial clusters",
|
| 104 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Nbin.png"
|
| 105 |
+
},
|
| 106 |
+
"3(a)": {
|
| 107 |
+
"figure_path": "2308.10062v7_figure_3(a).png",
|
| 108 |
+
"caption": "Figure 3: Normal, Poisson, & Uniform clusters",
|
| 109 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Normal.png"
|
| 110 |
+
},
|
| 111 |
+
"3(b)": {
|
| 112 |
+
"figure_path": "2308.10062v7_figure_3(b).png",
|
| 113 |
+
"caption": "Figure 3: Normal, Poisson, & Uniform clusters",
|
| 114 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Psn.png"
|
| 115 |
+
},
|
| 116 |
+
"3(c)": {
|
| 117 |
+
"figure_path": "2308.10062v7_figure_3(c).png",
|
| 118 |
+
"caption": "Figure 3: Normal, Poisson, & Uniform clusters",
|
| 119 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Uniform.png"
|
| 120 |
+
},
|
| 121 |
+
"4(a)": {
|
| 122 |
+
"figure_path": "2308.10062v7_figure_4(a).png",
|
| 123 |
+
"caption": "Figure 4: Pareto, Gamma, & Standard Cauchy (Lorentz) clusters",
|
| 124 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/pareto.png"
|
| 125 |
+
},
|
| 126 |
+
"4(b)": {
|
| 127 |
+
"figure_path": "2308.10062v7_figure_4(b).png",
|
| 128 |
+
"caption": "Figure 4: Pareto, Gamma, & Standard Cauchy (Lorentz) clusters",
|
| 129 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/gamma.png"
|
| 130 |
+
},
|
| 131 |
+
"4(c)": {
|
| 132 |
+
"figure_path": "2308.10062v7_figure_4(c).png",
|
| 133 |
+
"caption": "Figure 4: Pareto, Gamma, & Standard Cauchy (Lorentz) clusters",
|
| 134 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/standard_cauchy.png"
|
| 135 |
+
},
|
| 136 |
+
"5(a)": {
|
| 137 |
+
"figure_path": "2308.10062v7_figure_5(a).png",
|
| 138 |
+
"caption": "Figure 5: Bimodal, Trimodal, & Tultimodal clusters",
|
| 139 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Bimodal.png"
|
| 140 |
+
},
|
| 141 |
+
"5(b)": {
|
| 142 |
+
"figure_path": "2308.10062v7_figure_5(b).png",
|
| 143 |
+
"caption": "Figure 5: Bimodal, Trimodal, & Tultimodal clusters",
|
| 144 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Trimodal.png"
|
| 145 |
+
},
|
| 146 |
+
"5(c)": {
|
| 147 |
+
"figure_path": "2308.10062v7_figure_5(c).png",
|
| 148 |
+
"caption": "Figure 5: Bimodal, Trimodal, & Tultimodal clusters",
|
| 149 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/res/Multimodal.png"
|
| 150 |
+
},
|
| 151 |
+
"6": {
|
| 152 |
+
"figure_path": "2308.10062v7_figure_6.png",
|
| 153 |
+
"caption": "Figure 6: Boxplot",
|
| 154 |
+
"url": "http://arxiv.org/html/2308.10062v7/extracted/6076492/boxplot-description.png"
|
| 155 |
+
}
|
| 156 |
+
},
|
| 157 |
+
"validation": true,
|
| 158 |
+
"references": [],
|
| 159 |
+
"url": "http://arxiv.org/html/2308.10062v7"
|
| 160 |
+
}
|
20241217/2309.10426v4.json
ADDED
|
@@ -0,0 +1,202 @@
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Multi-Object Graph Affordance Network: Goal-Oriented Planning through Learned Compound Object Affordances",
|
| 3 |
+
"abstract": "Learning object affordances is an effective tool in the field of robot learning. While the data-driven models investigate affordances of single or paired objects, there is a gap in the exploration of affordances of compound objects composed of an arbitrary number of objects. We propose the Multi-Object Graph Affordance Network which models complex compound object affordances by learning the outcomes of robot actions that facilitate interactions between an object and a compound. Given the depth images of the objects, the object features are extracted via convolution operations and encoded in the nodes of graph neural networks. Graph convolution operations are used to encode the state of the compounds, which are used as input to decoders to predict the outcome of the object-compound interactions. After learning the compound object affordances, given different tasks, the learned outcome predictors are used to plan sequences of stack actions that involve stacking objects on top of each other, inserting smaller objects into larger containers and passing through ring-like objects through poles. We showed that our system successfully modeled the affordances of compound objects that include concave and convex objects, in both simulated and real-world environments. We benchmarked our system with a baseline model to highlight its advantages.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The affordances concept, introduced by J.J. Gibson to refer to the action possibilities provided by the environment [1 ###reference_b1###], has been significantly influential in robotics research [2 ###reference_b2###, 3 ###reference_b3###]. The developmental aspects of affordances have been widely adopted in robot learning research [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###]. While previous works have examined the affordances of single or paired simple object interactions, the affordances of compound objects composed of an arbitrary number of objects with concave shapes and varying sizes have not been sufficiently studied [8 ###reference_b8###].\nConsider an infant trying to build a tower with its toys. Because of the different shapes and sizes of the objects, each toy would afford different actions, and with each action, various effects would be generated. The affordances of the objects may change according to their relations with the other objects in the environment, i.e., while an empty cup affords insertability, it might loose this affordance if one or more objects are put inside the cup or a large box is stacked on the cup.\nHowever, if a small object is inserted in a big cup or several large rings are stacked on the cup, the cup would remain insertable. Predicting the affordance of the compound object is not straightforward, as the affordance of a compound object not only depends on the affordances of the included objects but is also determined based on in which order these objects are placed (e.g. via releasing the objects on the compound) and the relative positions of all objects. In order to address the challenge of learning compound object affordances, we propose to represent the objects in the compound as a graph, as the graph representation preserves spatial relations between objects, and can be used to propagate information along the chain of objects in the compound, enabling effective reasoning for the complete structure.\n###figure_1### Graph Neural Networks (GNNs) [9 ###reference_b9###] are effective for learning meaningful representations of structures and their relations. Consequently, they have gained extensive adoption in action recognition problems [10 ###reference_b10###], natural language processing [11 ###reference_b11###], navigation problems to learn relations between pedestrians, objects, and robots [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###], as well as reasoning about relations between multi-object systems [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. In these studies, the representation capacity of GNNs for spatial relations involving an unlimited number of inputs is exploited, whereas other commonly used feed-forward networks lack this property.\nAffordances refer to the relations between objects, actions and effects [4 ###reference_b4###], we aim to learn to predict effects given objects and actions. In our study, we propose the Multi-Object Graph Affordance Network (MOGAN), which learns affordances of compound objects, i.e. learns predicting effects of actions applied on objects and/or object compounds. The prediction is done using features obtained from graph representations utilizing GNNs. In this study, we focus on actions that facilitate interactions between objects. For this purpose, we used actions that pick up objects and place them on top of other objects or object compounds. 
The effects are encoded as the spatial displacements between the released objects and the objects in the compound structure. With concave objects, visibility can be altered when stacking, as the line of sight may intersect another object. In contrast, with convex objects, there is no need to measure visibility. Consequently, when dealing with concave objects, traditional effect representation based solely on their center points is insufficient to reason about visibility effects. Therefore, more complex effect representations are required to accurately convey information about the visibility of concave objects. As a result, a suitable novel effect representation is used.\nWe designed six different tasks using an inventory of convex and concave objects of varying sizes, including poles, cups and rings of different sizes, boxes, and balls. The learned affordances correspond to forward predictors, and therefore can be used for goal-oriented action selection and planning. We first presented the effect prediction results of compound object interactions in the Pybullet simulation environment. Subsequently, we discussed the success rates of plans generated through model predictions. Finally, we demonstrated the realization of the generated plans in the simulation environment, resulting in the construction of unseen structures composed of available objects. The results of our model were compared with those of the baseline model which corresponds to DeepSym model [19 ###reference_b19###]. While Deepsym is the state-of-the-art model for learning action-object-effect relational categories, it is limited to paired object interactions. We modified the Deepsym model to handle multi-object representations to benchmark it against our proposed model. We also demonstrated the applicability of our system by executing tasks with the UR10 manipulator in a real-world setting.\nIn summary, this paper introduces the MOGAN model, a novel approach for learning compound object affordances. Our contributions can be summarized as follows:\nProposal of Multi-Object Graph Affordance Network: We introduce a novel model, MOGAN, which encodes the affordances of compounds composed of varying number of objects by representing them as a graph structure without the need for supervision from experts or labeled affordances. Our model learns these affordances through the observed effects of robot interactions.\nIntroduction of a Novel Effect Encoding Method: We represent the effects of robot interactions with a particular encoding that takes into account 3D spatial relations. Widely used effects, such as displacement of the centers of the objects, are insufficient to explain the semantic behavior of concave objects.\nDemonstration of Applicability: We showed the applicability of our system by successfully accomplishing various tasks in both the Pybullet simulation environment and the real world using the UR10 manipulator."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Affordances",
|
| 21 |
+
"text": "The study of affordances [4 ###reference_b4###, 3 ###reference_b3###], [20 ###reference_b20###, 21 ###reference_b21###] has attracted significant attention in recent years, with embodied AI studies utilizing affordances to evaluate language model generations. For instance, Ahn et al. [22 ###reference_b22###] proposed SayCan, where they combined the skill affordances of a robot with Language Models (LLM) to ground the instructions to the environment. Additionally, Ahn et al. [23 ###reference_b23###] introduced AutoRT, where affordance filtering is utilized to align the tasks generated by vision-language models (VLMs) with the robot\u2019s capabilities and safety regulations. However, these studies primarily focused on extracting skill affordances by combining language model outputs with predefined instructions and rules, such as \u201cdo not lift heavy objects.\u201d In contrast, the concept in our study revolves around exploring the affordances of objects through robot interactions and observed effects, which are influenced by also the weights of the objects.\nSome approaches learned visual affordances to understand applicable actions through neural networks employed in computer vision. [24 ###reference_b24###] Qian et al. [25 ###reference_b25###] combined large-scale vision language models with an image encoder and an affordance decoder network to predict an affordance map based on the queried action. Birr et al. [26 ###reference_b26###] extracted affordances of detected objects through queries using a predefined prompt list of affordances in ChatGPT, an AI tool developed by OpenAI.\nDo et al. [27 ###reference_b27###] detected affordances of objects in images along with their classes, with affordances being labeled at the pixel level. Cuttano et al. [28 ###reference_b28###] proposed a model based on CLIP [29 ###reference_b29###] that grounds task-agnostic affordances of texts from an open vocabulary onto images. Depth values of object images are also utilized to define affordance relations. Toumpa and Cohn [30 ###reference_b30###] defined affordance relations concerning the concaveness of objects. While they measure concaveness by inspecting depth values based on [31 ###reference_b31###], we use deep autoencoders to learn features of objects that are not limited to concavity.\nVarious robotics research benefited affordances to enhance precision in grasping, picking, and placing operations [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###]. Hart et al. [35 ###reference_b35###] designed a ROS package enabling the operators to specify grasp poses. Corona et al. [36 ###reference_b36###] proposed a comprehensive network called GanHand, where the hand shape and pose are predicted through the reconstruction of the object alongside the predicted grasp type. [37 ###reference_b37###] studies mechanisms that produce hierarchical structuring of affordance learning tasks of different levels of complexity.\nMandikal and Grauman [38 ###reference_b38###] learned grasping policies utilizing 3D thermal affordance maps. Zeng et al. [39 ###reference_b39###] benefitted from labeled affordance maps for improved grasping. Learning contact information as affordances, as studied in [40 ###reference_b40###] and [41 ###reference_b41###], is another way to tackle the existing problems. Cheng et al. [42 ###reference_b42###] also learned the contact points of two objects for picking, grasping, and regrasping operations. Lin et al. 
[43 ###reference_b43###] learned pixel-wise pick-and-place affordances by generating 3D imaginary scenes from 2D images using an annotated dataset. Borja-Diaz et al. [44 ###reference_b44###] designed a self-supervised affordance learning model that labels gripper open and close points while the robot is controlled through human teleoperation. Mees et al. [45 ###reference_b45###] extended this work, grounding large language models to robotic applications. While these studies supervise their models to learn affordances using expert annotations, contact points, and gripper signals, we observe the effects of the manipulator\u2019s actions to explore the affordances. Cruz et al. [46 ###reference_b46###] used affordances in an interactive reinforcement learning setup in order to speed up the learning of skills from other agents.\nMultiple approaches have studied the exploration of affordances learning effects through interactions [47 ###reference_b47###, 48 ###reference_b48###]. Mar et al. [49 ###reference_b49###] explored affordance categories according to the effects of tool usage. They mapped the features extracted from the observations and affordance classes discovered by clustering the effects with the k-means algorithm [50 ###reference_b50###]. Antunes et al. [51 ###reference_b51###] defined affordances as the probability of effects given the object features, tool features, and action. With the formulation of goals as symbols, they achieved probabilistic planning. Saponaro et al. [52 ###reference_b52###] exploit the affordances learned from robot interactions to interpret and describe human actions in light of their own experience.\n[53 ###reference_b53###, 19 ###reference_b19###] also performed sub-symbolic and symbolic planning [54 ###reference_b54###], using affordances of only single or paired objects.\nWhile the affordances of objects with concave and convex shapes, such as mugs and spoons, are learned with the supervision of experts, exploring and discovering the affordances of complex structures generated by combining (inserting, passing through, stacking) of a sequence of such concave and convex objects through observed effects has not been studied to the best of our knowledge. We study the discovery of affordances of compounds, including concave shapes like rings, poles, and cups. In our study, we also adapt GNNs to exploit their representation capacity for compound object affordances.\n###figure_2###"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Graph Neural Networks",
|
| 27 |
+
"text": "Graph Neural Networks have been used in a wide range of domains to model systems composed of multiple parts such as the human body for human action understanding [55 ###reference_b55###], the human hand for gesture inference [56 ###reference_b56###], the electroencephalogram (EEG)-related measurements for emotion recognition [57 ###reference_b57###], and the images with multiple entities for inferring the relations between them [58 ###reference_b58###]. In robotics, on the other hand, the representation capability of GNNs for an unlimited number of objects and their relationships has enabled their widespread adoption [59 ###reference_b59###, 60 ###reference_b60###, 61 ###reference_b61###, 62 ###reference_b62###]. Lou et al. [63 ###reference_b63###] depicted densely clustered objects as graph structures and extracted the adjacency and occlusion relations. They then utilized GNNs to learn grasping poses for target objects, taking into account the spatial relations with other objects. Wilson and Hermans [64 ###reference_b64###] utilized GNNs in conjunction with CNNs to encode their multi-object environment for more accurate reward calculation during policy training. Lin et al. [65 ###reference_b65###] devised a graph structure in which objects and the goal positions for pick and place tasks are connected by edges. Subsequently, the learned GNN policy selects object and goal nodes to execute the steps of desired tasks. Huang et al. [66 ###reference_b66###] represented multi-object scenes as fully connected graph structures based on partial observations and learned the relations between nodes as logical symbols using GNN classifiers. In contrast, our study reasons relations between objects by learning observed effects of robot actions without defining logical symbols.\nGNNs are also commonly used for modeling dynamics of multi-object systems [67 ###reference_b67###]. Driess et al. [68 ###reference_b68###] employed GNNs to capture the dynamics between multiple objects for novel scene synthesis using Neural Radiance Fields (NeRF) [69 ###reference_b69###]. Tekden et al. [70 ###reference_b70###, 71 ###reference_b71###] introduced a learnable physics engine where objects are represented as graph structures, and the relations between them are classified at the edge level. With estimated relations between objects, future states are predicted based on the applied actions. However, the objects are simple-shaped, and their features are restricted to position and radius values. Additionally, only actions that push the objects in the horizontal plane are considered.\nOverall, in our study, compound objects are represented as graph structures, their features are learned utilizing GNNs, and the affordances are learned through effect predictions. Our system plans a sequence of actions (selecting an object to place it on the compound object) with a search algorithm utilizing the learned affordances."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Method",
|
| 33 |
+
"text": "Our proposed method models the affordances of compound objects, which are composed of an arbitrary number of objects that are placed on top of each other. Given the compound object and a new object, it learns to predict the effects generated by placing the new object on top of the compound object.\nIn our framework, an affordance, which is denoted as , is defined as the relation between the compound object () that resides on the table, the object () to be placed on top of the compound object, and the effects () generated: . Given and , our system is expected to learn , and . For learning, at the start of each exploration cycle, the size of the object compound () is initialized as 0. Then, the robot randomly selects and picks up an object (), places it on top of the current object compound, and observes the effects () until either the new object falls down or the object compound collapses.\nIn the rest of this section, we first describe how compound and single objects ( and ) and effects () are represented, and the details of the learning algorithm. Finally we describe how the learned affordances can be used to make plans in order to achieve different goals."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A Single Object Representation",
|
| 39 |
+
"text": "The single objects are represented by features extracted from their depth images using an autoencoder. The encoder component of the autoencoder takes in a 32x32 normalized depth image and comprises three linear layers with neuron sizes of 256, 256, and 64, respectively. Empirically, a latent space size of 4 was found to be sufficient for representing the images of the object set used in this study. The decoder part of the autoencoder is not utilized in the MOGAN model because we only need the latent vector; therefore, it is not shown in Figure 2 ###reference_###. The decoder depicted in the figure represents the decoder component of the MOGAN model. The autoencoder is trained with the single depth images collected from both the simulation environment and the real world until convergence.\nIn our system, the encoder component of the autoencoder is used to extract latent representations from the single images. These latent representations are then used to construct graph representations for the compounds, allowing the MOGAN model to learn the effects from them. The maximum and minimum values of the depth images are appended to the latent representations to prevent the information loss caused by the normalization operation.Therefore, single objects and their features, such as size, shape, and concaveness, are represented by a learned feature vector of size 6."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Compound Object Representation",
|
| 45 |
+
"text": "The compound objects are composed of different objects placed on top of each other. In order to represent a compound object, both the features of the single objects inside the compound and the spatial relations between the objects are required to be used. For this, we utilize a graph-based structure. A graph, denoted as , is defined as a tuple of nodes and edges .\nEach node, , consists of the object features acquired through the autoencoder. and denote the size of the feature vector of a node and an edge, respectively, while indicates the number of nodes within the graph, with no specific limits on this count. A directed edge between two nodes is defined when objects are placed consecutively in the tower, with the direction going from one object to the one placed before it, providing a hint for the spatial relations between the objects in the compound. In Section V-C ###reference_###, we also designed an edge creation algorithm for non-linear compounds to preserve these spatial relations. All nodes form self-connections.\n###figure_3###"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-C Effect Representation",
|
| 51 |
+
"text": "When an object is placed on the compound object, different types of effects, such as insertion in different ways, stacking, or toppling, are observed. Instead of categorizing each effect instance into a pre-defined effect category, we propose a generic continuous effect representation that captures the 3D spatial relations between the placed object and each object in the compound.\nIn other words, the effect represents the spatial outcome of placing the new object on the compound and is encoded as a combination: . describes the height differences between the top and bottom surfaces of each object pairs, considering their bounding boxes.\ncorrepsponds to the effect between the new object and the object in the object compound. and describe the maximum and minimum height values of an object derived from the bounding box of the object. is a sign function that assigns signs to the effect values. The function creates a vector starting from the top surface of the object to the top surface of the new object, using the center points of the surfaces. If the vector points toward the center of the object, the sign is considered negative. This comparison is performed for all the bounding box faces. encodes the lateral spatial differences between objects. The differences are calculated by sending imaginary rays through the new object, as shown in Fig. 3 ###reference_###. If the ray does not intersect with the interested object (outlined with green color), the relevant effect becomes 0. The signs of the differences are calculated with the sign function considering the points that the imaginary rays cut. The intersection points for each object are calculated using the function in PyBullet. This function provides the coordinates and object IDs where the ray intersects at each point. We use this function to find the red points indicated in Figure 3 ###reference_###.\nFinally, encodes whether the newly placed object falls down or the compound object collapses/topples when the new object is placed on top. The and functions get the x-y position and orientation of a given object and compare them with the base of the compound. They return the sum of the differences in these values. The thresholds and are used to determine whether the position and orientation values indicate a collapse.They are chosen empirically as 20 cm and 60 degrees, respectively."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.4",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-D Multi-Object Graph Affordance Network (MOGAN)",
|
| 57 |
+
"text": "The proposed MOGAN model, shown in Fig. 2 ###reference_###, outputs the effects () expected to be generated when a new object () is placed on the compound object (). As the compound object was formed by placing the objects one by one on top of each other, the depth images and the corresponding autoencoder features () were already collected and available for processing for a compound with a size of k. The autoencoder features of the new object to be placed on the compound object are also processed and is represented as . Our system, MOGAN, comprises the encoder part of the pretrained autoencoder, GCNConv layers, and a linear decoder. The depth image features are extracted by the encoder. A high-level graph representation is then constructed based on the current compound (), as explained in Section III-B ###reference_###, utilizing corresponding depth image features. Subsequently, two GCNConv [72 ###reference_b72###] layers process the graph representation of the compound to generate a latent representation. The mean and maximum values of these latent representations for all nodes are calculated and aggregated. The features of the newly added object (), the aggregated latent representation, and the latent representation of the queried object are concatenated. The decoder, consisting of three linear layers, takes this concatenated input to predict the effects between the queried object () and the new object () placed on top of the compound. In our system, the parameter size for the network is 46786, and Leaky ReLU is utilized as the activation function between the layers."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.5",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "III-E Planning and Tasks",
|
| 63 |
+
"text": "We aim to provide a variety of tasks to demonstrate the prediction capacity of the MOGAN for planning to achieve different goals. The goals include obtaining object compounds of specified heights, structures, and sub-structures. A tree search algorithm is realized to discover the optimal plan to achieve a specific goal. At each iterative step, the graph representation of the existing object compound is generated, and the object that will be placed on the tower is encoded. Three MOGAN networks predict the effects based on the graph representation of the compound and the feature vector of the new object. If predicted indicates a fall/collapse, the current branch of the search operation is terminated.\n###figure_4### In detail, six different tasks can be specified. The first two tasks correspond to building the tallest and shortest compounds/towers. In order to predict the height of the object compounds, the effect predictions are summed up. The third and fourth tasks correspond to obtaining structures where the placed objects are required to enclose the top part of the object compound and become obstructed in the compound (inserted inside). The accumulated predictions are used for this purpose. The fifth task corresponds to building a tower of a specific height. Finally, the sixth task enables the selection of two objects from the set of objects that will be used in the object compound and puts constraints on their relative placements, such as maximizing or minimizing their relative distances.\n###figure_5### ###table_1###"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "IV Experimental Setup",
|
| 69 |
+
"text": "In the real-world experiment setup, we employ a 7-DOF UR10 manipulator equipped with a Robotiq 3-Finger Adaptive Robot Gripper. The objects chosen for the experiment are selected from a variety of toys commonly played with by infants, enabling the exploration of affordance relations involving concave, convex objects, and compounds. As indicated in Table II ###reference_###, there are more small objects than large ones. This is because it becomes infeasible to find a suitable plan for the end-effector of the manipulator when dealing with compounds that are too tall. However, our approach does not impose a limitation on the number of objects in a compound. Positioned 1 meter above the table, centered, a Realsense RGBD camera is installed, with its lens directed downward to optimize capture. The Pybullet environment is used for simulating actions and interactions. A custom gripper is attached to the wrist of the UR10 manipulator in the simulator in order to speed up the pick and place action executions. The objects used in the simulator are created using Blender, taking into account their size information. These objects are depicted in Fig. 5 ###reference_### and Fig. 4 ###reference_###. The depth images of the simulated scene are captured using a virtual depth camera positioned at the same relative location as in the real-world experimental setup.\nDuring experiments, a subset of the inventory is spawned in a rectangular area at random positions. The depth image of the scene is segmented to acquire the depth images of the objects individually. In order to segment the depth image, the lowest values are grouped according to the pixel positions. The image is cropped according to the center pixel positions for each group. The positions of the objects are calculated using the center pixel positions and values and used during pick and place action executions. The trajectory for the manipulator to pick up and place an object is planned using MoveIt in the real-world setup, while built-in inverse kinematics functions are utilized in the simulation. The calculated positions for the objects are used as the positional goal for the end-effector during the pick operation. For the place operation, an additional 15 cm height is added to the goal position to eliminate potential object collisions. The orientation of the end-effector remains the same during operations.\nA data point consists of: 1) a 32x32 single object depth image, 2) 32x32 individual depth images for the objects in the compound, and 3) effects as explained in the Method Section. The depth images for the objects in the compound are derived from previous iterations. The training dataset comprises of 5000 data points acquired from simulation experiments. Prior to training, representations of both single and compound objects are acquired using the pretrained autoencoder.\nA MOGAN model is initialized with two GCNConv layers and three linear layers. The parameter size of the model is 46786 which is empirically found to prevent over fit. The model weights are randomly initialized with a torch seed value of 42. Mean Squared Error (MSE) loss and a custom sign loss are utilized as the loss functions. The sign loss, used for and , penalizes predictions that do not align with the correct signs compared to the ground truth data. The Adam optimizer is employed as optimization algorithm. The model is trained for 600 epoch with a batch size of 1. The learning rate starts from and gradually decreased with the learning rate scheduler. 
The gamma value is set to 0.95, and the step size is 500.\nTo demonstrate the efficiency of our proposed model, we compared it with a modified version of DeepSym [19 ###reference_b19###]. While Deepsym, a deep encoder-decoder network, learns relational symbols through effect predictions, the model is limited to paired object interactions. In the modified version, we encoded individual depth images and concatenated them. The size of the tensor is the multiplication of the feature size and the maximum object number in a compound. The maximum object number extracted from the dataset is 14 in our case. The remaining part of the input tensor remains 0 for the smaller-sized object compounds. In contrast to DeepSym, we did not utilize the Gumbel-Sigmoid function in the latent space, as our study does not focus on discrete symbol learning. Since we define our actions as adding a new object, we did not query additional actions. The decoder part learns the concatenation of all effects for each node in a compound.\nThe parameter size for the baseline model is 50178, which is close to but not less than the parameter size of our proposed model. Training and test results are compared with the MOGAN model in the Experiments and Results Section.\n###figure_6###"
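The training configuration described above can be summarized in a short sketch. The losses, optimizer, scheduler parameters (gamma 0.95, step 500), epoch count, batch size, and seed come from the text; the initial learning rate is not given there, so the value below is a placeholder, and stepping the scheduler per batch rather than per epoch is an assumption.

```python
import torch
import torch.nn as nn

def sign_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Custom sign penalty (sketch): penalizes effect components whose
    predicted sign disagrees with the ground-truth sign."""
    disagree = (torch.sign(pred) != torch.sign(target)).float()
    return (disagree * pred.abs()).mean()

def train(model, dataset, epochs=600, lr=1e-3):
    """Training-loop sketch: MSE plus the sign penalty, Adam, StepLR with
    gamma 0.95 and step size 500, batch size 1, fixed seed 42.
    lr=1e-3 is a placeholder; the paper's initial value is not stated here."""
    torch.manual_seed(42)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=500, gamma=0.95)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for compound, new_obj, query_obj, effects in dataset:  # batch size of 1
            pred = model(compound, new_obj, query_obj)
            loss = mse(pred, effects) + sign_loss(pred, effects)
            opt.zero_grad()
            loss.backward()
            opt.step()
            sched.step()  # assumed per-iteration stepping
    return model
```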
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Results",
|
| 75 |
+
"text": "###figure_7### In this section, we analyze the prediction error of our model for the unseen combinations of the composite objects and provide the results in Table III ###reference_###. The errors are grouped according to the compound object sizes to analyze the relation between compound object sizes and prediction errors. Effect 1 is the predicted height differences between two objects, as explained in the Method Section. The inventory contains objects with maximum, minimum, and mean height values of 17 cm, 1.5 cm, and 6.5 cm, respectively. The errors in Effect 1 predictions result in deviations of less than 1 cm in predicted height differences when the compound object size is 8 or less. If the object size exceeds 8, we observe a maximum error of 1.41 cm. Although these prediction errors do not significantly impact the majority of predictions due to the presence of considerably larger objects, they can lead to failures when predicting effects between smaller objects, such as small rings. The error for Effect 2 does not increase along with the compound object size. We can confidently state that our model is capable of predicting x and y displacements of objects without being affected by the number of objects. The ground truth value of Effect 3 is 1 when the tower collides, 0 otherwise. When we inspect the prediction errors for Effect 3, we see that it increases as the number of objects increases. However, the errors in Effect have minimal impact on the overall results due to the margin between ground truth values."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.1",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": " Simulation Experiments & Comparison with Baseline",
|
| 81 |
+
"text": "We evaluate the generated plans for six different tasks, we sample 10 different configurations for each compound object size, ranging from 2 to 5 in the simulator, as shown in Table I ###reference_###. For the fifth task, which is to build a compound object with a desired height, we calculated possible height values for the sampled configuration, selected one as the goal, and compared it to the resulting height. For the last task, we randomly selected two objects from the sampled set of objects to maximize or minimize their distances. Please see the generated and executed plans for a number of sample tasks in Fig 6 ###reference_###. In the 2nd and 3rd rows of Fig 6 ###reference_###, different tasks are assigned for the same set of objects. In the 2nd row, the model benefits the passability of rings onto the pole to keep the compound short. In the 3rd row, the model first benefits the stackability of rings and then stacks the pole to increase the height of the compound.\n###figure_8### Out of 300 planning tasks, our system was able to generate 283 successful plans, as shown in Table I ###reference_###. The success rate was observed to slightly drop when the number of objects increases. This was an expected result, as the number of objects in the compound increase, predicting the affordance of the compound object and how it is affected from placing another object on top becomes more difficult. Additionally, as the number of objects increases, the number of predictions done during the planning increases exponentially. One erroneous prediction among all the correct predictions may cause a failure in planning. It is important to note that our MOGAN model significantly outperformed the base model in planning, as shown in Table I ###reference_###, showing the effectiveness of using graph structures where the features of the objects in the compound are embedded in the nodes of the GNN for modeling multi-object affordances and for the multi-object planning problems.\n###figure_9###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5.2",
|
| 85 |
+
"parent_section_id": "5",
|
| 86 |
+
"section_name": "Real-world Experiments",
|
| 87 |
+
"text": "In the real-world setup, we test our system\u2019s planning capacity with the first two tasks: building the shortest and the tallest compound objects. We sampled 5 sets of objects for the compound object sizes 2, 3, and 4. A number of plan execution snapshots from sampled tasks are provided in Fig.7 ###reference_###. Out of the 30 real-world planning tasks, 28 of the generated plans were found to be successful, as shown in Figure 8 ###reference_###. The system is able to build desired compound objects 1) using the depth images from Realsense, 2) predicting effects with the MOGAN model, 3) planning an optimal path with the tree search algorithm, and 4) executing it with the UR10 manipulator. The success rate slightly decreases as the object number in the inventory increases. Another reason for the failure is the unpredictability of the real-world systems. Since the objects we use are plastic, they exhibit slight elasticity. Therefore, the object models in the simulation do not fully capture the physical properties of real-world objects. This can lead to unexpected results during the gripper\u2019s open and close operations. In Figure 9 ###reference_###, when the robot holds the pole, it grips it too tightly. As a result, when the gripper opens, the pole gets stuck between the fingers and does not fall."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.3",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Building Nonlinear Compounds",
|
| 93 |
+
"text": "In this section, we analyze experiments involving additional stacking actions. These actions include changes in position and orientation. In this setup, the agent can choose from 3 different locations along the x-axis and 2 different orientations (upside and downside), resulting in a total of 6 different actions. Orientation information is included in the feature vector to encode the objects\u2019 orientation, thus generating node features. To connect the nodes, edges are created between them following the Algorithm 1 ###reference_###. function returns if the bounding boxes of the objects intersect. The function determines whether two objects are in contact. If either of these functions returns , an edge is added between the two objects.\nIn this setup, two additional models are designed for comparison with the proposed MOGAN model. The first is the Multi-Object Graph Attentive Affordance Network (MOGAAN), where we utilize GATConv layers instead of GCNConv layers. The second is the Multi-Object Feed Forward Network (MOFFAN), where we use linear layers for encoding. In the latter, we concatenate the object features according to the object selection order. To train and test the models with the new actions, we collected a dataset of 3000 compounds consisting of cubes and cups, using 500 of them for testing. The MOGAN model is initialized as described in Section IV ###reference_###, with the input sizes adjusted to match the feature sizes of the dataset, and the x-axis position conditioned in the latent space. The parameter sizes for the other models are 47618, 48290 respectively. The training parameters for this experiment are also described in Section IV ###reference_###.\n###figure_10### The test errors for this experiment are provided in Table IV ###reference_###. The errors are grouped according to the compound object sizes. The errors for predicted height differences are shown under the Effect 1 column, where the Effect 2 column corresponds to the errors for lateral differences, and the Effect 3 column corresponds to the prediction errors for collapse. The comparisons with MOGAAN and MOFFAN indicate that our model outperforms the baselines, especially as the compound size increases.\nFor this experiment, we designed a task where the goal is to build a compound shaped like a bridge. When a list of 6 objects is presented, the models are required to find the stacking order and orientations for the predefined locations that result in a bridge shape. Out of 10 planning tasks, our proposed model, MOGAN, was able to plan compounds for all the desired tasks, and the plans were executed successfully in the simulation environment. The MOGAAN model could not generate plans for 4 of the tasks. The MOFFAN model generated 10 plans, but 3 of them collapsed in the simulation environment as shown in Table V ###reference_###. Figure 10 ###reference_### shows a comparison of the plans where the MOFFAN fails. While the MOGAN model builds legs of the bridge with similar sizes to stack another object in the middle, the MOFFAN model cannot build legs with similar sizes. This demonstrates that the MOFFAN model lacks the capability to reason about the relationship between leg sizes and stability, whereas our model can reason about the affordances of the bridge legs to construct a stable bridge.\nIn the following part of this section, we analyze two case studies to better understand the capabilities of the MOGAN model.\n###figure_11###"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.3.1",
|
| 97 |
+
"parent_section_id": "5.3",
|
| 98 |
+
"section_name": "V-C1 Case Study 1",
|
| 99 |
+
"text": "A compound building process is shown in Figure 11 ###reference_###, along with depiction of the graph structures and predicted effects. The figure shows the effect predictions for the new objects (green circles) in relation to the objects corresponding to the red-circled nodes. In part B of the figure, a cup is stacked onto a rotated cup. The model can reason about the rotations of objects, leading to accurate effect predictions. In parts C,D, and E, it is shown that the MOGAN model can reason about the spatial relations of objects in graphs with various edge connections, leading to accurate effect predictions when a new object is to be stacked."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.3.2",
|
| 103 |
+
"parent_section_id": "5.3",
|
| 104 |
+
"section_name": "V-C2 Case Study 2",
|
| 105 |
+
"text": "In this experiment, while building a compound, a rotated cup covers a smaller cup as shown in part C of Figure 12 ###reference_###. Note that for all objects with different orientations, depth images are taken in the initial position and features are extracted by the autoencoder. Then, the object\u2019s orientation information is appended to the feature vector. The MOGAN model is able to learn the continuous effects and for the \u201ccovering\u201d effect, as shown in part C of the figure, where the predicted effects indicate intersecting bounding boxes, with the top and side of the red-circled object remaining inside the newly added green-circled object. Additionally, the model predicts effect as before the stacking action when the height difference between the subcompounds is too large to place a new object in the middle, demonstrating its ability to reason about spatial relations between nodes in a graph to predict the collapse effect, as shown in part D of Figure 12 ###reference_###.\n###figure_12###"
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6",
|
| 109 |
+
"parent_section_id": null,
|
| 110 |
+
"section_name": "VI Conclusion",
|
| 111 |
+
"text": "In this research, we proposed a novel Multi-Object Graph Affordance Network, MOGAN, which models affordances of compound objects for manipulation and planning.\nWe showed that our system was able to correctly predict the affordances of compound objects that include spheres, cups, poles, and several rings that enclose the poles. This prediction capability was effectively used to build different structures via planning structures of highly complex affordances.\nIn the future, we plan to discover symbolic affordances of compound structures and utilize AI planners for task realization. Additionally, to enable an end-to-end and generalizable approach, we intend to use point clouds and RGB images to represent our objects and depth values to retrieve effect of actions."
|
| 112 |
+
}
|
| 113 |
+
],
|
| 114 |
+
"appendix": [],
|
| 115 |
+
"tables": {
|
| 116 |
+
"1": {
|
| 117 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>comparison of plan success rates in the simulation environment with the multi-object deepsym as baseline (mds)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1\">Size</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.2.1\">Task 1: Tallest</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.3.1\">Task 2: Shortest</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.4.1\">Task 3: Occluded</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.5.1\">Task 4: Occluding</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.1.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.6.1\">Task 5: Specific Height</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.1.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.7.1\">Task 6: Condition</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.1\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.2\">MDS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.3\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.4\">MDS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.5\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.6\">MDS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.7\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.8\">MDS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.9\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.10\">MDS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.11\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.12\">MDS</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.1\">2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.2\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.3\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.4\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.5\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" 
id=\"S3.T1.1.3.3.6\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.7\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.8\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.9\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.10\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.11\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.12\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.3.13\">100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.1\">3</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.2\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.3\">60</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.4\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.5\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.6\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.7\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.8\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.9\">70</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.10\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.11\">60</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.12\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.4.13\">60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.1\">4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.2\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.3\">50</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.4\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.5\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.6\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.7\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.8\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.9\">60</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.10\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.11\">60</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.12\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.5.13\">60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.1\">5</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.2\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r 
ltx_border_t\" id=\"S3.T1.1.6.6.3\">10</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.4\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.5\">60</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.6\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.7\">40</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.8\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.9\">60</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.10\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.11\">50</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.12\">90</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.6.13\">60</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 118 |
+
"capture": "TABLE I: comparison of plan success rates in the simulation environment with the multi-object deepsym as baseline (mds)"
|
| 119 |
+
},
|
| 120 |
+
"2": {
|
| 121 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>object details</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1\">Name</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.2\">Size (Height, Width, Depth) (cm)</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.3\">Number</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.1\">Pole</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.2\">(17, 14, 14)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2.1\">Ball</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2.2\">(5, 5, 5)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.2.3\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.1\">Cube</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.2\">(10, 10, 10)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.5.4.1\">Ring</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.5.4.2\">(3, 12, 12)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.5.4.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.6.5.1\">Ring</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.6.5.2\">(2.5, 10.5, 10.5)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.6.5.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.7.6.1\">Ring</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.7.6.2\">(2.4, 9.7, 9.7)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.7.6.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.8.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.8.7.1\">Ring</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.8.7.2\">(2, 9, 9)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.8.7.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.9.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.9.8.1\">Ring</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.9.8.2\">(1.5, 8, 8)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" 
id=\"S4.T2.1.9.8.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.10.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.10.9.1\">Cup</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.10.9.2\">(10, 10.5, 10.5)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.10.9.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.11.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.11.10.1\">Cup</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.11.10.2\">(8.5, 7.5, 7.5)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.11.10.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.12.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.12.11.1\">Cup</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.1.12.11.2\">(7.5, 6.5, 6.5)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.1.12.11.3\">1</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 122 |
+
"capture": "TABLE II: object details"
|
| 123 |
+
},
|
| 124 |
+
"3": {
|
| 125 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>prediction errors for the unseen simulation data <span class=\"ltx_text\" id=\"S4.T3.2.1\">in decimeters</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.3.1.1.1.1\">Tower Size</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T3.3.1.1.2\">Test Errors</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.2.2.1\">Effect 1 (dm)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.2.2.2\">Effect 2 (dm)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.2.2.3\">Effect 3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.3.3.1\">1</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.3.3.2\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.3.3.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.3.3.4\">0.010</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.4.4.1\">2</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.4.4.2\">0.008</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.4.4.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.4.4.4\">0.099</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.5.5.1\">3</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.5.5.2\">0.022</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.5.5.3\">0.004</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.5.5.4\">0.149</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.6.6.1\">4</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.6.6.2\">0.043</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.6.6.3\">0.003</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.6.6.4\">0.177</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.7.7.1\">5</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.7.7.2\">0.063</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.7.7.3\">0.002</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.7.7.4\">0.177</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" 
id=\"S4.T3.3.8.8.1\">6</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.8.8.2\">0.085</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.8.8.3\">0.002</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.8.8.4\">0.184</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.9.9.1\">7</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.9.9.2\">0.093</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.9.9.3\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.9.9.4\">0.240</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.10.10.1\">8</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.10.10.2\">0.109</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.10.10.3\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.10.10.4\">0.145</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.11.11.1\">9</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.11.11.2\">0.126</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.11.11.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.11.11.4\">0.185</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.12.12.1\">10</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.12.12.2\">0.141</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.12.12.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.12.12.4\">0.231</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.13.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.13.13.1\">11</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.13.13.2\">0.134</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.13.13.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.13.13.4\">0.306</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.14.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.14.14.1\">12</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.14.14.2\">0.123</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.14.14.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.14.14.4\">0.322</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.15.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.15.15.1\">13</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.15.15.2\">0.092</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.3.15.15.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" 
id=\"S4.T3.3.15.15.4\">0.330</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.16.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.16.16.1\">14</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.3.16.16.2\">0.108</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.3.16.16.3\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.3.16.16.4\">0.252</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 126 |
+
"capture": "TABLE III: prediction errors for the unseen simulation data in decimeters"
|
| 127 |
+
},
|
| 128 |
+
"4": {
|
| 129 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>prediction errors for the unseen simulation data <span class=\"ltx_text\" id=\"S5.T4.2.1\">in decimeters<span class=\"ltx_text\" id=\"S5.T4.2.1.1\"> </span></span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.3.1.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T4.3.1.1.1.1\">Tower Size</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"9\" id=\"S5.T4.3.1.1.2\">Test Errors</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T4.3.2.2.1\">Effect 1 (dm)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T4.3.2.2.2\">Effect 2 (dm)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T4.3.2.2.3\">Effect 3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.1\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.2\">MOGAAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.3\">MOFFAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.4\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.5\">MOGAAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.6\">MOFFAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.7\">MOGAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.8\">MOGAAN</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3.9\">MOFFAN</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.1\">1</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.2\">0.036</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.3\">0.037</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.4\">0.035</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.5\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.6\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.7\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.8\">0.013</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.9\">0.010</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.4.4.10\">0.011</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.1\">2</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.3.5.5.2.1\">0.041</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" 
id=\"S5.T4.3.5.5.3\">0.042</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.4\">0.050</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.5\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.6\">0.002</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.7\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.8\">0.019</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.9\">0.0535</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.5.5.10\">0.015</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.1\">3</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.3.6.6.2.1\">0.023</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.3.6.6.3.1\">0.023</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.4\">0.156</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.5\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.6\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.7\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.8\">0.0521</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.9\">0.204</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.3.6.6.10\">0.033</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.1\">4</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.3.7.7.2.1\">0.044</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.3\">0.073</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.4\">0.067</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.5\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.6\">0.001</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.7\">0.000</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.3.7.7.8.1\">0.186</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.9\">0.253</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.3.7.7.10\">0.323</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 130 |
+
"capture": "TABLE IV: prediction errors for the unseen simulation data in decimeters "
|
| 131 |
+
},
|
| 132 |
+
"5": {
|
| 133 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>comparison of plan success rates with mogaan and moffan</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.1.2\">Planning Success</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.1.3\">No Solution</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.1.4\">Failure</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T5.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.1.2.1.1\">MOGAN</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.2.1.2\">100</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.2.1.3\">0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.2.1.4\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.2.1\">MOGAAN</th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.2.2\">60</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.2.3\">40</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.2.4\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.1.4.3.1\">MOFFAN</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.1.4.3.2\">70</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.1.4.3.3\">0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.1.4.3.4\">30</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 134 |
+
"capture": "TABLE V: comparison of plan success rates with mogaan and moffan"
|
| 135 |
+
}
|
| 136 |
+
},
|
| 137 |
+
"image_paths": {
|
| 138 |
+
"1": {
|
| 139 |
+
"figure_path": "2309.10426v4_figure_1.png",
|
| 140 |
+
"caption": "Figure 1: Execution of the plan generated using our MOGAN model to build the shortest compound object given a pole and two rings in the real world setup. The agent uses the pole as the base and stacks the rings, as they do not change the height of the compound.",
|
| 141 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/realrobotfirstlast.jpg"
|
| 142 |
+
},
|
| 143 |
+
"2": {
|
| 144 |
+
"figure_path": "2309.10426v4_figure_2.png",
|
| 145 |
+
"caption": "Figure 2: MOGAN: Multi-Object Graph Affordance Network Architecture, along with the pretrained autoencoder. The depth images of single objects are encoded with the autoencoder. It then constructs the graph representation of the compound object. The proposed model, MOGAN, extracts meaningful features from the graph and predicts the resulting effect between a single object and a queried object within the compound object. The predicted effect is visually depicted in the rightmost image with a dashed green circle.",
|
| 146 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/IMG_0700.png"
|
| 147 |
+
},
|
| 148 |
+
"3": {
|
| 149 |
+
"figure_path": "2309.10426v4_figure_3.png",
|
| 150 |
+
"caption": "Figure 3: Visualization of the calculation of lateral spatial displacements: Imaginary rays are projected through the center of the new object. Red points illustrate the intersections with both the compounding object and newly added object. The black arrows are calculated by the function s\ud835\udc60sitalic_s.",
|
| 151 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/effect2.jpeg"
|
| 152 |
+
},
|
| 153 |
+
"4": {
|
| 154 |
+
"figure_path": "2309.10426v4_figure_4.png",
|
| 155 |
+
"caption": "Figure 4: A PyBullet environment featuring a UR10 robot and various objects, including cubes, poles, balls, cups, and rings.",
|
| 156 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/simenv.jpeg"
|
| 157 |
+
},
|
| 158 |
+
"5": {
|
| 159 |
+
"figure_path": "2309.10426v4_figure_5.png",
|
| 160 |
+
"caption": "Figure 5: Various objects used in the real-world setup: a pole, rings, cups, a cube, and balls.",
|
| 161 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/real_objects.jpeg"
|
| 162 |
+
},
|
| 163 |
+
"6": {
|
| 164 |
+
"figure_path": "2309.10426v4_figure_6.png",
|
| 165 |
+
"caption": "Figure 6: A number of sample plan executions in the simulator. The tasks are (1) to minimize the invisibility of the given objects, (2) to build the shortest compound object using a pole and different sized rings, (3) to build the tallest compound object using a pole and different sized rings, and (4) to build a compound object given a constraint between the pink and dark green cups.",
|
| 166 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/IMG_0405.png"
|
| 167 |
+
},
|
| 168 |
+
"7": {
|
| 169 |
+
"figure_path": "2309.10426v4_figure_7.png",
|
| 170 |
+
"caption": "Figure 7: A number of snapshots from real-world planning experiments. In the first, second, and fourth images, the objective is to construct the shortest compound objects. In the third image, the goal is to create the tallest compound object. The system observes the scene, predicts the effects of each potential plan using MOGAN, and executes the optimal one.",
|
| 171 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/realrobot4examples.jpg"
|
| 172 |
+
},
|
| 173 |
+
"8": {
|
| 174 |
+
"figure_path": "2309.10426v4_figure_8.png",
|
| 175 |
+
"caption": "Figure 8: Plan success rates in the real world. The goals are to build the shortest and tallest compound objects. 5 trials were conducted for each set of different sizes.",
|
| 176 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/realworld_grid.png"
|
| 177 |
+
},
|
| 178 |
+
"9": {
|
| 179 |
+
"figure_path": "2309.10426v4_figure_9.png",
|
| 180 |
+
"caption": "Figure 9: A failure case is illustrated here. When the goal is to build the tallest compound, the MOGAN model chooses to place the yellow ring and the pole on top of the pink ring, respectively. However, the pole gets squeezed between the fingers of the 3-finger gripper, resulting in failure, as shown in part F.",
|
| 181 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/IMG_failure.png"
|
| 182 |
+
},
|
| 183 |
+
"10": {
|
| 184 |
+
"figure_path": "2309.10426v4_figure_10.png",
|
| 185 |
+
"caption": "Figure 10: Three experiments where the MOFFAN model fails are shown in comparison to the MOGAN model. The MOGAN model can reason about the lengths of the legs of a compound to successfully stack another object onto the middle, whereas the MOFFAN model cannot.",
|
| 186 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/moganvsmoffan.png"
|
| 187 |
+
},
|
| 188 |
+
"11": {
|
| 189 |
+
"figure_path": "2309.10426v4_figure_11.png",
|
| 190 |
+
"caption": "Figure 11: An example of online graph generation and effect prediction for a new test set containing six different stacking actions is shown. The rotation of the objects is encoded in the node features, while the x-axis position is conditioned in the latent space. In the image, the effects of the green-circled object corresponding to the red-circled objects are displayed.",
|
| 191 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/bridge_last.png"
|
| 192 |
+
},
|
| 193 |
+
"12": {
|
| 194 |
+
"figure_path": "2309.10426v4_figure_12.png",
|
| 195 |
+
"caption": "Figure 12: An example of online graph generation and effect prediction for collision detection is provided. Since the pink cup covers the gray cup, the lengths of the compound\u2019s legs vary, preventing further stacking.",
|
| 196 |
+
"url": "http://arxiv.org/html/2309.10426v4/extracted/6074796/materials/covering_last.png"
|
| 197 |
+
}
|
| 198 |
+
},
|
| 199 |
+
"validation": true,
|
| 200 |
+
"references": [],
|
| 201 |
+
"url": "http://arxiv.org/html/2309.10426v4"
|
| 202 |
+
}
|
20241217/2310.00074v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2311.02691v2.json
ADDED
|
@@ -0,0 +1,276 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Age of Information Analysis for CR-NOMA Aided Uplink Systems with Randomly Arrived Packets",
|
| 3 |
+
"abstract": "This paper studies the application of cognitive radio inspired non-orthogonal multiple access (CR-NOMA) to reduce age of information (AoI) for uplink transmission.\nIn particular, a time division multiple access (TDMA) based legacy network is considered, where each user is allocated with a dedicated time slot to transmit its status update information. The CR-NOMA is implemented as an add-on to the TDMA legacy network, which enables each user to have more opportunities to transmit by sharing other user\u2019s time slots. A rigorous analytical framework is developed to obtain the expressions for AoIs achieved by CR-NOMA with and without re-transmission, by taking the randomness of the status update generating process into consideration. Numerical results are presented to verify the accuracy of the developed analysis. It is shown that the AoI can be significantly reduced by applying CR-NOMA compared to TDMA.\nMoreover, the use of re-transmission is helpful to reduce AoI,\nespecially when the status arrival rate is low.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "With a rapid development of wireless communications, establishing ubiquitous connectivity for massive machine type communications (mMTC) becomes feasible [2 ###reference_b2###, 3 ###reference_b3###]. Under this background, more and more real-time applications for monitoring and controlling are emerging, e.g., autonomous driving. In these scenarios,\ninformation sources (such as sensors) need to frequently transmit their status updates to destinations,\nin order to keep the status information collected by the destinations as freshness as possible. Because the fresher the status information is, the more conducive it is for making correct decisions. To this end, the concept of age of information (AoI) has been recently proposed as a new metric to characterize the timeliness of status updating systems [4 ###reference_b4###]. In particular, AoI is defined as the time duration of the newest status update observed at the receiver since its generation. Existing literature shows that minimizing AoI is not equivalent to maximizing utilization (throughput) or minimizing status packet delivery delay. Due to the above reasons,\nthe study of AoI has raised considerable attention from both academia and industry [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###].\nThe AoI in single-source scenarios has been extensively investigated in the literature [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. However, in multi-source scenarios, due to the limited degrees of freedom (DoF) for wireless transmission, the devices need to share the channel resource blocks to complete their status updating transmissions. As a result, the AoI achievable for a certain source depends heavily on the adopted multiple access (MA) technique, which determines how channel resource blocks are allocated to multiple users [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###]. Orthogonal multiple access (OMA) is a very straightforward way to avoid inter-user interferences, and has been widely used in communication networks.\nIn [15 ###reference_b15###], the achievable AoIs for time division multiple access (TDMA) and frequency division multiple access (FDMA) were investigated, which shows that TDMA outperforms FDMA in terms of average AoI, while\nFDMA is better in terms of stability under time-varying channels.\nDifferent from OMA, non-orthogonal multiple access (NOMA) allows multiple users to transmit signals simultaneously by occupying the same channel resource block. It is shown by the literature that, compared to OMA, NOMA is more spectral efficient, and more supportive for massive connectivity and low latency [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. 
Therefore, it is important to investigate the role of NOMA in reducing AoI in status updating systems [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###].\nIn [19 ###reference_b19###] and [20 ###reference_b20###], dynamic policies to switch between NOMA and OMA were developed to minimize AoI.\nIn [21 ###reference_b21###], the achievable peak AoI for NOMA with the first-come-first-served (FCFS) queuing rule was studied, where sources are ordered according to their distances to the base station.\nIn [22 ###reference_b22###], AoI was optimized for a reconfigurable intelligent surface\n(RIS) assisted NOMA network by using tools from reinforcement learning.\nIn [23 ###reference_b23###], NOMA based AoI in low earth orbit (LEO) satellite-terrestrial integrated networks was\ninvestigated, where average AoI minimization in terrestrial\nnetworks and average AoI minimization among satellites were both considered.\nIn [25 ###reference_b25###], cognitive radio inspired NOMA (CR-NOMA), as a very important form of NOMA, has also been applied to reduce AoI in status updating systems.\nThe key idea of CR-NOMA is that one user has additional transmission opportunities as a secondary user by sharing other users\u2019 resource blocks. Compared to existing NOMA schemes considered in [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###], a very appealing feature of CR-NOMA is\nits simplicity of implementation. Specifically, CR-NOMA can be implemented as a simple add-on to a legacy network based on OMA, with very limited modifications to the legacy network.\nIt has been shown in [25 ###reference_b25###] and [26 ###reference_b26###] that CR-NOMA can play an important role in reducing AoI.\nParticularly, in [25 ###reference_b25###], CR-NOMA was implemented over a time division multiple access (TDMA) based legacy network, where each user is allocated a single dedicated time slot in each frame, and each user is offered one additional opportunity to transmit within each frame by sharing its partner\u2019s time slot. Two data generation models were considered, namely generate-at-will (GAW) and generate-at-request (GAR), where\nthe GAW model assumes that a new status update is generated right before each transmit time slot, and the GAR\nmodel assumes that a new status update is generated right at the beginning of each frame.\nHowever, GAW and GAR are ideal models, which might be unrealistic in many practical scenarios where the status data is generated randomly. To the author\u2019s best knowledge, how to characterize the AoI performance of the CR-NOMA\nassisted status updating system with random arrivals is still open, which motivates this paper.\nThis paper aims to investigate the average AoI achievable for the CR-NOMA assisted status updating system when status updates arrive randomly. Similar to [25 ###reference_b25###], a TDMA based legacy network is considered, based on which CR-NOMA\nis carried out as an add-on. The main contributions of this paper are listed as follows.\nDifferent from the existing work [25 ###reference_b25###] which adopts an ideal data generation model, this paper\nconsiders a more general model by capturing the randomness of the data generation. As a result, the queuing process of the waiting status data packets has to be considered, which is a new challenging problem compared to\nthe GAW and GAR models. 
Since only the newest status data affects the AoI at the receiver, this paper considers the\ncommonly used last-come-first-served (LCFS) queuing rule. Besides, the strategies with and without retransmission are both considered in the paper, which poses a further challenge compared to the GAW and GAR models.\nThrough rigorous derivation, closed-form expressions for the average AoIs achieved by CR-NOMA with and\nwithout re-transmission (termed \u201cNOMA-RT\u201d and \u201cNOMA-NRT\u201d, respectively) are obtained. Besides, for comparison purposes, analyses for TDMA based schemes are also provided. Note that, compared to the analyses for the GAW and GAR models,\nthe analysis for the random arrival model is much more challenging, especially for NOMA-RT, due to the fact that\neach user\u2019s queuing buffer state and data transmission reliability are coupled with those of its partner.\nSimulation results are provided to validate the developed analytical results. Comparisons of the considered CR-NOMA schemes with the existing TDMA based schemes are also provided. It is shown that the achievable average AoI can be significantly reduced by applying CR-NOMA. Furthermore, retransmission is necessary to reduce AoI, especially when the data arrival rate is low. Moreover, the impact of system parameters on AoI, such as the data arrival rate and the duration of a time slot, has also been demonstrated and discussed.\nThe remainder of this paper is organized as follows. In Section II, the system model and the considered transmission strategy are described. In Section III, analytical frameworks are developed to characterize the average AoI achieved by the considered transmission strategies. Simulation results are presented in Section IV. Finally, the paper is concluded in Section V."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II System model",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Update arrival and queuing process",
|
| 21 |
+
"text": "Consider a wireless communication scenario, where sources send their status updates to one receiver and each source is denoted by , . Random arrivals are considered for the status generation process. Specifically, status updates arrive at source \naccording to a one-dimensional Poisson process with parameter 111Although this paper focuses on Poisson process for modeling the random arrivals, it is noteworthy that the developed analytical framework is also applicable to other arrival models, such as the Bernoulli model [27 ###reference_b27###] and GAW model [25 ###reference_b25###].\nIt is assumed that each status update packet contains bits.\nThe channel resources are divided into consecutive time slots and are allocated to the sources.\nThe considered time slot allocation rules will be discussed later. Note that each source can only transmit its status update through the assigned transmitting time slots. Last-come-first-served (LCFS) queuing is considered at each source.\nSpecifically, each source maintains a buffer with size one to save the latest update to be transmitted. If a new status update arrives at a source, the source will put the new update into the buffer by dropping the previously saved update information. At the beginning of each transmitting time slot, each source moves the status information saved in its queuing buffer to its transmitter, meanwhile the queuing buffer is set to be empty to accommodate future updates. It is noteworthy that if a new update comes during the transmitting time slot, it can be pushed into the queuing buffer, but it does not affect the transmission of the transmitted data."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Multiple Access Strategies",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.2.1",
|
| 31 |
+
"parent_section_id": "2.2",
|
| 32 |
+
"section_name": "II-B1 TDMA",
|
| 33 |
+
"text": "This paper considers TDMA as the benchmark multiple access strategy. Specifically, the timeline is divided into consecutive time frames. In each TDMA time frame, each source is allocated a single time slot with duration .\nWithout loss of generality, the -th time slot in each frame is allocated to .\nEach source is allowed to transmit update information to the receiver only within the assigned time slot, if it has an\nupdate data to transmit. Therefore, the achievable data rate of in the -th time slot of frame is given by:\nwhere is the transmit power, denotes the channel of in the -th time slot of frame . Note that, without loss of generality, the noise power is normalized in this paper."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.2.2",
|
| 37 |
+
"parent_section_id": "2.2",
|
| 38 |
+
"section_name": "II-B2 CR-NOMA",
|
| 39 |
+
"text": "CR-NOMA can be used as an add-on to TDMA to improve the freshness of the data collected at the receiver.\nParticularly, in CR-NOMA, and are paired together to form a NOMA group, where and . In each NOMA group, the paired users can share the channel resource block with each other.\nSpecifically, in the -th time slot of frame , and are treated as the primary user and secondary user, respectively. Note that, transmits its signal with power as in TDMA, if it has update information to transmit.\nMeanwhile, can also transmit signal within the time slot by applying NOMA, if it has the updated information to transmit. The application of NOMA is transparent to the primary user. To this end, the secondary user\u2019s signal is decoded at the first stage of SIC 222Although fixed SIC order is adopted in this paper, it is worth pointing out that advanced SIC methods, such\nas hybrid SIC is helpful to further reduce AoI for the considered system [28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###]., which can ensure that the primary user achieves the same transmission reliability as in TDMA. Hence, the achievable data rate of in the -th time slot of frame is given by:\nwhere is the transmit power of the secondary user, and is an indicator variable to\ndenote whether source transmits a signal in the -th time slot of frame .\nSimilarly, in the -th time slot of frame , is treated as the primary user and is the secondary user. Following the same transmission strategy aforementioned above, achieves the same transmission performance as in TDMA, and transmits signal opportunistically, yielding the following achievable data rate:\nOne appealing feature of the considered CR-NOMA is explained as follows.\nFor the considered CR-NOMA scheme, when transmits in the -th slot, its transmission success probability is given by:\nwhere , which is the same as the transmission success probability of in the TDMA mode.\nSimilarly, for the considered CR-NOMA scheme, when transmits in the -th slot, its transmission success probability is given by:\nwhich is also the same as the transmission success probability of if TDMA is adopted.\nBesides, the application of CR-NOMA can also help to improve the bandwidth utilization.\nThe reasons are as follows. In TDMA schemes, each source is allocated with\na dedicated slot. If the source has no packet to transmit in a slot, then this slot cannot be utilized, resulting in a low utilization. However, when the considered CR-NOMA schemes are applied,\nif source has no packet to transmit in the -th slot, the -th slot can still be possibly utilized by\nsource . Thus, the bandwidth utilization can be improved by the considered CR-NOMA compared to TDMA.\nIn this paper, perfect channel state information (CSI) and ideal SIC are assumed for simplifying the analysis. However, it is noteworthy that the developed analytical framework can be extended to the cases where imperfect CSI and SIC are considered [31 ###reference_b31###]. Besides, to facilitate CSI estimation, it is necessary to set a mini-slot before the each user\u2019s transmission slot for pilot transmission, which may increase the AoI. However, due to reason that the length of the mini-slot is usually much shorter than that of the user\u2019s transmission slot, the impact of the CSI acquisition on AoI can be ignored."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.3",
|
| 43 |
+
"parent_section_id": "2",
|
| 44 |
+
"section_name": "II-C With and without re-transmission",
|
| 45 |
+
"text": ""
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.3.1",
|
| 49 |
+
"parent_section_id": "2.3",
|
| 50 |
+
"section_name": "II-C1 Without re-transmission",
|
| 51 |
+
"text": "at the end of each transmitting slot, the transmitted data will be discarded by the transmitter, regardless of whether the data transmission is successful or not."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "2.3.2",
|
| 55 |
+
"parent_section_id": "2.3",
|
| 56 |
+
"section_name": "II-C2 With re-transmission",
|
| 57 |
+
"text": "at the end of each transmitting slot, if the signal is not successfully transmitted and there is no new status update data arrives, the transmitted data will be moved back into the queuing buffer for re-transmission. Otherwise, the transmitted data will be discarded. Note that, if a newer status update comes before the next transmitting time slot, the previously received update data will be discarded, in order to improve the freshness of the data collected at the source 333Please note that the utilization of retransmissions may result in higher energy consumption. In this paper, it is assumed that the additional energy consumption is affordable for the source, where the timeliness of the information is more important..\nFor notational convenience, the TDMA scheme with and without re-transmission is termed \u201cTDMA-RT\u201d and \u201cTDMA-NRT\u201d, respectively. And the NOMA scheme with and without re-retransmission is termed \u201cNOMA-RT\u201d and \u201cNOMA-NRT\u201d, respectively.\nSome modifications to the legacy TDMA network is necessary for the implementation of CR-NOMA. First, if re-transmission strategy is adopted, the receiver needs to carry out a one-bit feedback to the users at the end each slot, to inform the users whether the re-transmission is needed. Besides, to ensure that the transmission of the secondary user to the corresponding primary user is transparent, the receiver needs to feedback the permitted maximal\ndata rate shown in (2) (or (3)) to the secondary user."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "2.4",
|
| 61 |
+
"parent_section_id": "2",
|
| 62 |
+
"section_name": "II-D Performance metric",
|
| 63 |
+
"text": "###figure_1### In this paper, the age of information (AoI) is used as the performance metric to evaluate the freshness of the\nlatest update which has been successfully delivered to the receiver. Note that, only the status updates which are successfully delivered to the receiver affect the AoI. For a specific source , the generation time of the\n-th successfully delivered status update packet is denoted by , and its corresponding arrival time\nat the receiver is denoted by .\nThe instantaneous AoI of source \u2019s update at the receiver is a time varying function, which is denoted by and determined by the time difference between the current time and the generation time of the newest status update information observed at the receiver. Let denote the index of the newest update observed at the receiver, then the instantaneous AoI of can be expressed as:\nwhere is the generation time of the newest status update information observed at the receiver.\nNote that, the age process forms a sawtooth path as illustrated in Fig. 1 ###reference_###.\nThe average AoI of is defined as the average of AoI over time, which can be expressed as [5 ###reference_b5###]:\nThe evaluation of can be described as follows.\nFor the ease of exposition, denote by the interval between the -th and ()-th\nsuccessful delivery, and by the system time of a successfully delivered update.\nIt can be straightforwardly verified that the evaluation of the average AoI is equivalent to find the sum of a\nseries of trapezoidal areas, denoted by [6 ###reference_b6###]. As shown in Fig. 1 ###reference_###, where\nThen, the average AoI can be expressed as [8 ###reference_b8###]:\nFurther, when is a stationary and ergodic process, the evaluation of can be simplified as:\nNote that discrete AoI metrics have been widely adopted in the literature for simplification.\nHowever, such metrics are not applicable for this paper. Because random packet arrival model is considered in this paper, which means that the new status update packet may arrives at any instant of a frame. Thus, it is necessary for\nthe AoI metric to have the capability to quantify fractional duration of a time slot, which excludes the utilization\nof discrete AoI metrics."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "III Analysis on AoI for TDMA-NRT, NOMA-NRT, TDMA-RT and NOMA-RT",
|
| 69 |
+
"text": "In this section, the average AoIs achieved by the TDMA-NRT, NOMA-NRT, TDMA-RT and NOMA-RT schemes are analyzed, respectively. Due to the symmetry among users, it is sufficient to focus on a particular user, say user .\nCompared to the schemes with retransmission, the analyses for TDMA-NRT and NOMA-NRT are relatively easier, since the whole time line can be split into consecutive and independent parts.\n###figure_2###"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.1",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "III-A AoI analysis for TDMA-NRT",
|
| 75 |
+
"text": "The average AoI achieved by the considered TDMA-NRT scheme can be characterized by the following theorem.\nThe average AoI achieved by the considered TDMA-NRT scheme, denoted by , can be expressed as:\nwhere .\nPlease refer to Appendix A.\n\u220e"
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.2",
|
| 79 |
+
"parent_section_id": "3",
|
| 80 |
+
"section_name": "III-B AoI analysis for NOMA-NRT",
|
| 81 |
+
"text": "For the TDMA-NRT scheme, the status updating process can be divided into statistically identical and\nindependent phases with equal duration . Hence, the derivation of the average AoI for TDMA-NRT can be significantly\nsimplified by utilizing the aforementioned property. Different from the TDMA-NRT scheme, where each source has only\none chance to transmit in a frame, the NOMA-NRT scheme offers an additional transmission chance for source\n by using the -th slot. As a result, the status updating process in NOMA-NRT can be divided into consecutive\nphases with duration . Although it can be easily proved that the status updatings in these phases are\nalso statistically independent, the probabilities of a successful update in the -th and -th slot in a frame\nare different. As a consequence, it is necessary to take into account where a successful updates happens for the derivation of the average AoI, which is the main challenge caused by NOMA-NRT compared to TDMA-NRT. The average AoI achieved by the considered NOMA-NRT scheme can be characterized by the following theorem.\nThe average AoI achieved by the considered NOMA-NRT scheme, denoted by , can be expressed as:\nwhere , , , , , , and\nPlease refer to Appendix B.\n\u220e"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "3.3",
|
| 85 |
+
"parent_section_id": "3",
|
| 86 |
+
"section_name": "III-C AoI analysis for TDMA-RT",
|
| 87 |
+
"text": "The average AoI achieved by the considered TDMA-RT scheme can be characterized by the following theorem.\nThe average AoI achieved by the considered TDMA-RT scheme, denoted by , can be expressed as:\nwhere , .\nPlease refer to Appendix C.\n\u220e\nThe main difference between the analysis for TDMA-NRT and TDMA-RT schemes is that there\u2019s time\ncorrelation of the status updating process in TDMA-RT. Because whether there\u2019s packet to transmit in the current slot\ndepends on not only the arrival of new packet in the interval, but also whether there\u2019s packet\nwhich was not successfully transmitted in the last frame. By using tools from the Markov chain theory,\nthe average AoI achieved by TDMA-RT can be obtained, as highlighted in the following theorem."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "3.4",
|
| 91 |
+
"parent_section_id": "3",
|
| 92 |
+
"section_name": "III-D AoI analysis for NOMA-RT",
|
| 93 |
+
"text": "The analysis for NOMA-RT scheme is the most challenging among the considered four schemes. The reasons\nare mainly as follows:\nthere\u2019s time correlation for the status updating process of a single source. Because, whether there\u2019s packet\nto transmit in the current slot is affected by the updating status of the last transmission slot.\nthere\u2019s also correlation between paired sources. The reason is that the transmission success of a\nsecondary user depends on the primary user\u2019s transmission.\nThe single-user time correlation and the inter-user correlation are coupled as illustrated by Fig. 14 ###reference_###.\nTo derive the average AoI achieved by source in NOMA-RT, it is necessary to first derive the expression for\nthe transmission success probability under the stationary state of the status updating process.\nFor the NOMA-RT scheme, conditioning on the steady state of the status updating process, the transmission success probability when transmits in the -th time slot, can be expressed as the solution of the following equation:\nwhere , and\nand , , , , ,\n, ,\n, .\nPlease refer to Appendix D.\n\u220e\nThe average AoI achieved by the considered NOMA-RT scheme, denoted by , can be expressed as:\nwhere\nand , , , and are shown in (14 ###reference_###), (15 ###reference_###) and (16 ###reference_###), respectively.\nPlease refer to Appendix E.\n\u220e"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "IV Numerical Results",
|
| 99 |
+
"text": "In this section, numerical results are presented to verify the accuracy of the developed analysis, and also demonstrate AoI performance achieved by the considered TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes.\n###figure_3### ###figure_4### Fig. 3 ###reference_### shows the average AoI achieved by TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes.\nThe simulations results are obtained by averaging over consecutive frames. It can be clearly observed from both Fig. 3 ###reference_###(a) and Fig. 3 ###reference_###(b), simulation results perfectly match the analytical results for all the considered schemes, which validates the accuracy of the developed analysis.\nBesides, it can be seen from Fig. 3 ###reference_###(a) and Fig. 3 ###reference_###(b) that the average AoIs achieved by NOMA-NRT and NOMA-RT schemes outperform their TDMA counterparts.\n###figure_5### ###figure_6### Fig. 4 ###reference_### demonstrates the impact of the packet arrival\nrate on the average AoI achieved by TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes, respectively. As shown in the figure, at low arrival rates, the AoIs achieved by TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT decrease rapidly with the increase of arrival rates. In contrast, at high arrival rates, the AoIs achieved by the four schemes approach a constant, respectively. Another interesting observation is that, for both cases with and without retransmission, the gap between the AoIs achieved by CR-NOMA and TDMA at a high arrival rate is much larger than that at a low arrival rate. This is because at a low arrival rate, the AoI is significantly limited by the arrival rate, while at a high arrival rate, the AoI is limited more by the opportunities to transmit status updates.\n###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### Figs. 5 ###reference_###-7 ###reference_### show the impact of the duration of a time slot on the AoI achieved by TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes under different values of the number of users, packet size and data arrival rate, respectively.\nAs shown in the three figures, for both cases with and without retransmission, the AoIs achieved by both TDMA and CR-NOMA first decrease with and then increase. This observation can be explained by the following two facts. On the one hand, as increases, the frame length will increase, which is unfavorable for reducing AoI. On the other hand, as increases,\nthe transmission reliability can be increased, which is beneficial for reducing\nAoI. Hence, for a small , the dominant factor for reducing AoI is the transmission reliability, and as a result, increasing can help to reduce the AoI.\nBesides, when is sufficiently large, the dominant limitation for reducing AoI becomes the length of each frame, and as a result, increasing yields a larger AoI.\nIt can also be seen from Figs. 5 ###reference_###-7 ###reference_### that the optimal value of is affected by the number of users, the packet sizes and the data arrival rates, due to the fact that these parameters also affect the frame length and transmission reliability. However, the impacts of the aforementioned three factors on the optimal value of are coupled with each other, which makes it difficult to analyze the optimal value of .\n###figure_13### ###figure_14### ###figure_15### Fig. 8 ###reference_### and Fig. 9 ###reference_### show the comparison of TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT in terms of average AoI. As shown in Fig. 
8 ###reference_### and 9 ###reference_###, the NOMA-NRT and NOMA-RT schemes outperform the TDMA-NRT and TDMA-RT schemes, respectively. Furthermore, it can be observed that the gap between the AoIs achieved by the NOMA-NRT scheme (or NOMA-RT scheme) and the TDMA-NRT scheme (or TDMA-RT scheme) at a low SNR is larger than that at a high SNR.\nIn addition, as shown in Fig. 8 ###reference_###, when , NOMA-RT achieves a lower average AoI compared to NOMA-NRT. By contrast, when , the curves for NOMA-RT and NOMA-NRT overlap with each other. Thus, it can be concluded that the re-transmission strategy is more necessary for scenarios with low packet arrival rates.\nIt can also be observed from Fig. 9 ###reference_### that the gap between the AoIs achieved by the NOMA-NRT (or NOMA-RT) scheme and the TDMA-NRT (or TDMA-RT) scheme increases with the number of users. The reason is twofold. First, CR-NOMA outperforms its TDMA counterpart mainly because of its shorter waiting time between consecutive transmissions. Second, as the number of users increases, the additional waiting time of TDMA compared to NOMA becomes longer, hence enlarging the AoI gap.\n###figure_16### ###figure_17### ###figure_18### ###figure_19### Fig. 10 ###reference_### shows the impact of the number of users on the average AoIs for the TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes, under different status updating packet arrival rates. As shown in the figure, the AoIs achieved by the TDMA-NRT, TDMA-RT, NOMA-NRT, and NOMA-RT schemes increase with the number of users for a given arrival rate. In addition, the gap between the AoIs achieved by NOMA-NRT (or NOMA-RT) and TDMA-NRT (or TDMA-RT) increases with the number of users. Another interesting observation is that the gap between the AoIs achieved by a retransmission scheme and its corresponding non-retransmission scheme vanishes as the number of users increases. Moreover, as the arrival rate increases, the NOMA-NRT scheme achieves almost the same AoI as the corresponding NOMA-RT scheme.\n###figure_20### Fig. 11 ###reference_### shows the impact of packet size on the average AoIs achieved by the TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes. As shown in the figure, the AoIs achieved by TDMA-NRT, TDMA-RT, NOMA-NRT, and NOMA-RT increase with the packet size, since a larger packet size results in lower transmission reliability.\nInterestingly, the comparisons of the curves under different parameter settings behave differently. On the one hand, for a low packet arrival rate, both the NOMA-RT and NOMA-NRT schemes significantly outperform their TDMA counterparts when the packet size is small. However, the NOMA-NRT scheme might not outperform, or might even be worse than, the TDMA-NRT scheme when the packet size is relatively large. The reason can be explained as follows. Suppose an updating packet arrives at user just before user \u2019s slot; then user will transmit the packet as a secondary user in the -th slot. However, for a large packet size, it is highly possible that the transmission of a secondary user fails. As a consequence, the transmission opportunity is wasted and the packet is dropped, followed by a long waiting time for a new packet\u2019s arrival when the arrival rate is low.\nOn the other hand, for higher packet arrival rates, the CR-NOMA schemes outperform their TDMA counterparts. However, as the packet size increases, the performance gain becomes less significant. This can also be explained by the fact that the transmission reliability of a secondary user in CR-NOMA decreases as the packet size increases.\n###figure_21### Fig. 12 ###reference_### shows the relationship between the average AoI and the average energy consumption (AEC) for the TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes. Note that AEC is obtained by the equation:\n, where denotes the number of time slots in which one source transmits signals, and denotes the total number of frames.\nIt can be clearly observed from Fig. 12 ###reference_### that the AoIs decrease and the AECs increase with SNR, and the NOMA-NRT (NOMA-RT) scheme can achieve a smaller AoI than the TDMA-NRT (TDMA-RT) scheme but at the cost of a higher AEC. In addition, the NOMA-RT scheme can achieve a lower AoI compared to the NOMA-NRT scheme, also at the cost of a higher AEC. Taking the energy budget into consideration when minimizing AoI is an important future research direction."
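A rough idea of how such simulation curves can be produced is sketched below for the schemes without retransmission. The fading model, the success condition T*log2(1+SINR) >= N, the choice of the partner slot half a frame later, and the independent approximation of the partner's buffer occupancy are all assumptions of this sketch rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def sim_avg_aoi(noma=False, lam=0.1, M=8, T=1.0, N=1, snr_db=20.0, frames=100_000):
    """Monte Carlo average AoI of one source without retransmission (a sketch)."""
    snr = 10 ** (snr_db / 10)
    frame = M * T
    p_partner_busy = 1 - np.exp(-lam * frame)          # approximation: partner transmits i.i.d.
    total_area, t_last, gen_last = 0.0, 0.0, 0.0
    buf = None                                         # generation time of the buffered update
    next_arrival = rng.exponential(1 / lam)
    slot_starts = [0.0, (M // 2) * T] if noma else [0.0]
    for i in range(frames):
        for off in slot_starts:
            t = i * frame + off
            while next_arrival <= t:                   # size-one LCFS buffer keeps the newest
                buf = next_arrival
                next_arrival += rng.exponential(1 / lam)
            if buf is None:
                continue
            g = rng.exponential(1.0)                   # Rayleigh fading gain
            if noma and off > 0 and rng.random() < p_partner_busy:
                sinr = snr * g / (1 + snr * rng.exponential(1.0))   # secondary slot, interfered
            else:
                sinr = snr * g                         # own (primary) slot, interference-free
            if T * np.log2(1 + sinr) >= N:             # successful delivery at t + T
                t_new = t + T
                y = t_new - t_last
                total_area += y * (t_last - gen_last) + 0.5 * y * y  # trapezoid area (Fig. 1)
                t_last, gen_last = t_new, buf
            buf = None                                 # NRT: drop the packet after one attempt
    return total_area / t_last

print("TDMA-NRT:", sim_avg_aoi(noma=False))
print("NOMA-NRT:", sim_avg_aoi(noma=True))
```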
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Conclusions",
|
| 105 |
+
"text": "The application of CR-NOMA to reduce information timeliness of status updating systems has been investigated in this paper, where the randomness of the data generation process has been considered. The LCFS queuing strategy has also been adopted. Closed-form expressions for the average AoIs achieved by NOMA-NRT and NOMA-RT schemes have been obtained. Simulation results have been provided to verify the developed analysis and also demonstrate the superior performance of applying CR-NOMA to reduce AoI.\nNote that, fixed power allocation has been considered in this paper, considering power budget and designing practical power allocation schemes will be a very important research direction in future. Besides, in this paper, at most two users can transmit in a single time slot. For future work, it is important to study the schemes which can accommodate more users for ensuring the freshness of data in a single slot.\nMoreover, the considered TDMA has limitations to be applied in dense scenarios for reducing AoI, due to overhead, synchronization issues and resource allocation complexity.\nFor scenarios with dense sources, it is important to apply random access methods, such as grant free schemes, to ensure information freshness, which is left as an important future research direction.\nLast but not least, it can be envisioned that rate splitting multiple access (RSMA) [32 ###reference_b32###] has potential to further reduce AoI, by splitting the secondary user\u2019s signal into two independent sub-signals, which is left as an important future exploration direction."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [
|
| 109 |
+
{
|
| 110 |
+
"section_id": "Appendix 1",
|
| 111 |
+
"parent_section_id": null,
|
| 112 |
+
"section_name": "Appendix A Proof for Theorem",
|
| 113 |
+
"text": "It can be easily found that and are independent of each other, thus, we have"
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"section_id": "Appendix 2",
|
| 117 |
+
"parent_section_id": null,
|
| 118 |
+
"section_name": "Appendix B Proof for Theorem",
|
| 119 |
+
"text": "To obtain the average AoI achieved by NOMA-NRT, the first task is to evaluate .\nNote that, the transmission of the -th successfully delivered update might be finished at the end of either the -th or the -th time slot of a frame, yielding different distributions of . Thus, can be evaluated as follows:\nwhere and denote the events that the transmission of the -th successfully delivered update is finished at the end of the -th and -th time slot, respectively, and step (a) follows from the fact that, given (or ), and are independent of each other.\nIn the following, it will be shown that the calculation of (32 ###reference_###) can be significantly simplified. To this end, we first evaluate and .\nAs shown in Fig.2 ###reference_### (b), can be divided into two parts as:\n, where is the waiting time of the transmitted update since its generation, and is the transmission time.\nNote that the evaluation of should be taken under the condition that the following two events occur, say and , where denotes the event that there is at least one status update generated within the interval with duration before the start of the\ntransmission time slot, and denotes the event that the status update is finally successfully transmitted within the transmitting time slot. It is noteworthy that can be divided into\ntwo disjoint events, i.e., , where \nand denote the transmission is completed within the -th time slot and -th time slot of a frame, respectively. Then, and can be expressed as follows:\nIn the following, it will be shown how can be evaluated. First, it is necessary to characterize the distribution of given , which is given by:\ncan be calculated as follows:\nwhere step (a) follows from the fact that () implies the occurrence of , step (b) follows from the fact that is independent of the event that and , respectively, and Step (c) is obtained by noting that the status update generation follows a Poisson process.\nThen, can be easily obtained as follows:\nSimilarly, can be expressed as:\nInterestingly, it can be easily found that\nwhich straightforwardly results in\nThus, can be further expressed as:\nHence, can be simplified as follows:\nTherefore, the remainder of the proof is to evaluate and .\nTo obtain and , it is necessary to first evaluate the transmission success\nprobability of . The transmission success probability of if the -th time slot is used is\ngiven by as shown in (4 ###reference_###). In contrast, when transmits signal in the -th time slot, its transmission is likely to be interfered by , depending on whether transmits data in the -th time slot. Hence, the corresponding transmission success probability can be evaluated as follows:\nwhere .\nAs aforementioned, the distribution of is dependent on where the last successful update ends, or equivalently,\n or happens. Thus, can be expressed as:\nFor notational simplicity, denote by the probability of the event that there is status update to be transmitted before the -th time slot of a given frame and it is successfully delivered by using the -th time slot. Similarly, denote as the probability of the event that there is status update to be transmitted before the -th time slot of a given frame and it is successfully delivered by using the -th time slot.\nIt is straightforward to show that and can be expressed as follows:\nThen, and can be expressed as:\nTo evaluate , it is necessary to characterize the conditional distribution of given . Note that the value of can be expressed as , where is a random positive integer. 
It can be obtained that:\nThus, can be obtained as follows:\nSimilarly, can be obtained as follows:\nTherefore, with some algebraic manipulations, the expression for can be obtained as follows:\nSimilarly, the expressions of and can be obtained as follows:\nand\nTherefore, can be expressed as:\nwhich completes the proof."
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"section_id": "Appendix 3",
|
| 123 |
+
"parent_section_id": null,
|
| 124 |
+
"section_name": "Appendix C Proof for Theorem",
|
| 125 |
+
"text": "It is straightforward to show that and are independent of each other,which leads to the following:"
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"section_id": "Appendix 4",
|
| 129 |
+
"parent_section_id": null,
|
| 130 |
+
"section_name": "Appendix D Proof for Lemma",
|
| 131 |
+
"text": "###figure_22### ###figure_23### Note that, and and the distributions of and in the steady state are coupled, as shown in Fig. 14 ###reference_###. Thus, the key to evaluate is to establish equations for its relationships with\n and the distributions of and ,\nand then solve them.\nGiven , the distribution of in steady state can be obtained as follows.\nThe transition process of for consecutive frames can be modeled as a Markov chain as shown in Fig. 15 ###reference_###. and denote and , respectively. The corresponding transition matrix can be expressed as:\nwhere\nDenote the steady state probabilities for and by and , respectively. Then, we have:\nGiven and , can be expressed as:\nSimilarly, the transition process of for consecutive frames can also be modeled as a Markov process, with the following transition matrix:\nwhere\nDenoted the steady state probabilities for and by and , respectively, which leads to the following:\nGiven and , can be expressed as:\nBy combining (83 ###reference_###), (D ###reference_5###), (90 ###reference_###) and (D ###reference_7###), the expression for can be obtained, and the proof is complete."
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"section_id": "Appendix 5",
|
| 135 |
+
"parent_section_id": null,
|
| 136 |
+
"section_name": "Appendix E Proof for Theorem",
|
| 137 |
+
"text": "can be written as follows:\nwhere (or ) denotes the event that the transmission of the -th successfully delivered update is finished at the end of the -th (or )-th time slot. Note that step (a) follows from the fact that, given (or ), and are independent of each other.\n###figure_24### To obtain and , it is necessary to consider all possible states (from the receiver perspective) at the end of -th and -th slot of each frame, whose transitions can be modeled as a Markov chain, as shown in\nFig. 16 ###reference_###. In Fig. 16 ###reference_###, (or ) denotes the state that\na new status update packet arrives at the receiver successfully within the -th slot (or -th slot).\n (or ) denotes the state that there is no new status update data received by the receiver within the -th slot (or -th slot), due to the reason that there\u2019s no status data to be transmitted within the time slot. (or ) also denotes the state that there is no new status update data received by the receiver within the -th slot (or -th slot), due to the transmission failure.\nThe corresponding probability transition matrix can be expressed as shown in (93 ###reference_###) at the top of next page.\nDenote the steady state probability for by , .\nThe expression of can be obtained by solving the following steady state equation:\nParticularly, and can be expressed by (22 ###reference_###) and (23 ###reference_###), respectively.\nThen, according to the definitions of and , it can be easily obtained that:\nThe next task is to evaluate , where can be obtained similarly.\nAs shown in Fig.2 ###reference_### (d), can be divided into two parts as follows:\nwhere is the waiting time of the transmitted update from its generation to the start of its first transmission, and is the time duration of the transmitted update from the start of its first transmission to the end of its final transmission.\n###figure_25### Note that the evaluation of should be taken under the condition that the following two events occur, namely and , where denotes the event that there is at least one status update generated within the interval with duration before the start of the first transmission time slot, and denotes the event that the status update is finally successfully transmitted within the transmitting time slot. It is noteworthy that can be divided into\ntwo disjoint events, i.e., , where and denote the transmission is completed within the -th time slot and -th time slot of a frame, respectively. Thus, the expression of can be written as follows:\nBy following the similar steps from (35 ###reference_###) to (37 ###reference_###), the expression for can be obtained as follows:\nRewrite as , where is a random nonnegative integer, and can be expressed as follows:\nWhen k is an even number, we have:\nwhere , and when k is an odd number, we have:\nThen, the expression for can be obtained, which is given by:\nBy taking (100 ###reference_0###)-(102 ###reference_2###) into (99 ###reference_###), it can be obtained that:\nThus, the expression for can be obtained as:\nSimilarly, it can be obtained that:\nThe next step is to evaluate .\nAs shown in Fig. 
17 ###reference_###, given , the state transition process from the end instant of the ()-th successful transmission (state ) to the end instant of the -th successful transmission (state ) can be modeled as a Markov process with an absorbing wall, where , , , , and are the transient states, and is the absorbing state.\nState (or ) denotes the state that there is no one status update packet to be transmitted within the -th (or -th) slot, and state (or ) denotes the state that there is status update packet to be transmitted within the -th (or )-th slot. The probability transition matrix for the absorbing Markov chain is given by:\nwhere\nIt can be observed that is the total time elapsed from state to state , which\ncan be expressed as , where is the number of steps from to . Hence, it can be obtained that:\nBy following the similar steps from (66 ###reference_###) to (67 ###reference_###), the expression for can be obtained, as shown in the following:\nBy using the same method for , the expression for can be obtained as follows:\nBy taking (95 ###reference_###), (104 ###reference_4###), (105 ###reference_5###), (115 ###reference_5###), and (116 ###reference_6###) into (E ###reference_8###), the expression for can be straightforwardly obtained:\nFurthermore, the expression for can be obtained as follows:\nBy following the similar steps from (70 ###reference_###) to (76 ###reference_###), the expressions for and can be obtained, which are given by:\nwhere , , , are shown in (106 ###reference_6###), (107 ###reference_7###) and (108 ###reference_8###), and\nwhere , , , are shown in (109 ###reference_9###), (110 ###reference_0###) and (111 ###reference_1###).\nThus, the expression for can be obtained as follows:\nThe proof is complete."
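The expected time from one successful delivery to the next in this absorbing-chain model can be obtained numerically with the standard fundamental-matrix construction: with Q the transient-to-transient block of the transition matrix, the expected numbers of steps to absorption are the row sums of (I - Q)^{-1}. The small Q below is an assumed example, not the matrix defined in the appendix.

```python
import numpy as np

def expected_steps_to_absorption(Q):
    """Expected number of steps to absorption from each transient state.

    Q is the transient-to-transient block of an absorbing Markov chain; the
    fundamental matrix (I - Q)^{-1} times the all-ones vector gives the
    expected step counts.
    """
    n = Q.shape[0]
    fundamental = np.linalg.inv(np.eye(n) - Q)
    return fundamental @ np.ones(n)

# assumed example with two transient states
Q = np.array([[0.2, 0.5],
              [0.3, 0.4]])
print(expected_steps_to_absorption(Q))
```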
|
| 138 |
+
}
|
| 139 |
+
],
|
| 140 |
+
"tables": {},
|
| 141 |
+
"image_paths": {
|
| 142 |
+
"1": {
|
| 143 |
+
"figure_path": "2311.02691v2_figure_1.png",
|
| 144 |
+
"caption": "Figure 1: Illustration of the AoI of a status updating process.",
|
| 145 |
+
"url": "http://arxiv.org/html/2311.02691v2/x1.png"
|
| 146 |
+
},
|
| 147 |
+
"2": {
|
| 148 |
+
"figure_path": "2311.02691v2_figure_2.png",
|
| 149 |
+
"caption": "Figure 2: Illustration of the status updating process and the corresponding AoI evolution for TDMA-NRT,NOMA-NRT, TDMA-RT and NOMA-RT. It can be seen that by using NOMA and retransmission mechanism, more transmission opportunities can be provided, which can significantly reduce the instantaneous AoI.",
|
| 150 |
+
"url": "http://arxiv.org/html/2311.02691v2/x2.png"
|
| 151 |
+
},
|
| 152 |
+
"3(a)": {
|
| 153 |
+
"figure_path": "2311.02691v2_figure_3(a).png",
|
| 154 |
+
"caption": "(a) TDMA-NRT and NOMA-NRT\nFigure 3: Average AoI achieved by TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT.\n\u03bbm=\u03bbm\u2032=0.1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20320.1\\lambda_{m}=\\lambda_{m^{\\prime}}=0.1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 0.1, M=8\ud835\udc408M=8italic_M = 8, T=3\ud835\udc473T=3italic_T = 3.",
|
| 155 |
+
"url": "http://arxiv.org/html/2311.02691v2/x3.png"
|
| 156 |
+
},
|
| 157 |
+
"3(b)": {
|
| 158 |
+
"figure_path": "2311.02691v2_figure_3(b).png",
|
| 159 |
+
"caption": "(b) TDMA-RT and NOMA-RT\nFigure 3: Average AoI achieved by TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT.\n\u03bbm=\u03bbm\u2032=0.1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20320.1\\lambda_{m}=\\lambda_{m^{\\prime}}=0.1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 0.1, M=8\ud835\udc408M=8italic_M = 8, T=3\ud835\udc473T=3italic_T = 3.",
|
| 160 |
+
"url": "http://arxiv.org/html/2311.02691v2/x4.png"
|
| 161 |
+
},
|
| 162 |
+
"4(a)": {
|
| 163 |
+
"figure_path": "2311.02691v2_figure_4(a).png",
|
| 164 |
+
"caption": "(a) TDMA-NRT and NOMA-NRT\nFigure 4: Impact of packet arrival rates on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. N=1\ud835\udc411N=1italic_N = 1 bit, T=1\ud835\udc471T=1italic_T = 1, \u03bbm=\u03bbm\u2032subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u2032\\lambda_{m}=\\lambda_{m^{\\prime}}italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT.",
|
| 165 |
+
"url": "http://arxiv.org/html/2311.02691v2/x5.png"
|
| 166 |
+
},
|
| 167 |
+
"4(b)": {
|
| 168 |
+
"figure_path": "2311.02691v2_figure_4(b).png",
|
| 169 |
+
"caption": "(b) TDMA-RT and NOMA-RT\nFigure 4: Impact of packet arrival rates on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. N=1\ud835\udc411N=1italic_N = 1 bit, T=1\ud835\udc471T=1italic_T = 1, \u03bbm=\u03bbm\u2032subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u2032\\lambda_{m}=\\lambda_{m^{\\prime}}italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT.",
|
| 170 |
+
"url": "http://arxiv.org/html/2311.02691v2/x6.png"
|
| 171 |
+
},
|
| 172 |
+
"5(a)": {
|
| 173 |
+
"figure_path": "2311.02691v2_figure_5(a).png",
|
| 174 |
+
"caption": "(a) TDMA-NRT and NOMA-NRT\nFigure 5: Impact of the duration of a time slot on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. N=2\ud835\udc412N=2italic_N = 2 bits, \u03bbm=\u03bbm\u2032=0.1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20320.1\\lambda_{m}=\\lambda_{m^{\\prime}}=0.1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 0.1.",
|
| 175 |
+
"url": "http://arxiv.org/html/2311.02691v2/x7.png"
|
| 176 |
+
},
|
| 177 |
+
"5(b)": {
|
| 178 |
+
"figure_path": "2311.02691v2_figure_5(b).png",
|
| 179 |
+
"caption": "(b) TDMA-RT and NOMA-RT\nFigure 5: Impact of the duration of a time slot on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. N=2\ud835\udc412N=2italic_N = 2 bits, \u03bbm=\u03bbm\u2032=0.1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20320.1\\lambda_{m}=\\lambda_{m^{\\prime}}=0.1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 0.1.",
|
| 180 |
+
"url": "http://arxiv.org/html/2311.02691v2/x8.png"
|
| 181 |
+
},
|
| 182 |
+
"6(a)": {
|
| 183 |
+
"figure_path": "2311.02691v2_figure_6(a).png",
|
| 184 |
+
"caption": "(a) TDMA-NRT and NOMA-NRT\nFigure 6: Impact of the duration of a time slot on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. M=8\ud835\udc408M=8italic_M = 8, \u03bbm=\u03bbm\u2032=0.1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20320.1\\lambda_{m}=\\lambda_{m^{\\prime}}=0.1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 0.1.",
|
| 185 |
+
"url": "http://arxiv.org/html/2311.02691v2/x9.png"
|
| 186 |
+
},
|
| 187 |
+
"6(b)": {
|
| 188 |
+
"figure_path": "2311.02691v2_figure_6(b).png",
|
| 189 |
+
"caption": "(b) TDMA-RT and NOMA-RT\nFigure 6: Impact of the duration of a time slot on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. M=8\ud835\udc408M=8italic_M = 8, \u03bbm=\u03bbm\u2032=0.1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20320.1\\lambda_{m}=\\lambda_{m^{\\prime}}=0.1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 0.1.",
|
| 190 |
+
"url": "http://arxiv.org/html/2311.02691v2/x10.png"
|
| 191 |
+
},
|
| 192 |
+
"7(a)": {
|
| 193 |
+
"figure_path": "2311.02691v2_figure_7(a).png",
|
| 194 |
+
"caption": "(a) TDMA-NRT and NOMA-NRT\nFigure 7: Impact of the duration of a time slot on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. N=2\ud835\udc412N=2italic_N = 2 bits, M=8\ud835\udc408M=8italic_M = 8.",
|
| 195 |
+
"url": "http://arxiv.org/html/2311.02691v2/x11.png"
|
| 196 |
+
},
|
| 197 |
+
"7(b)": {
|
| 198 |
+
"figure_path": "2311.02691v2_figure_7(b).png",
|
| 199 |
+
"caption": "(b) TDMA-RT and NOMA-RT\nFigure 7: Impact of the duration of a time slot on AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. N=2\ud835\udc412N=2italic_N = 2 bits, M=8\ud835\udc408M=8italic_M = 8.",
|
| 200 |
+
"url": "http://arxiv.org/html/2311.02691v2/x12.png"
|
| 201 |
+
},
|
| 202 |
+
"8": {
|
| 203 |
+
"figure_path": "2311.02691v2_figure_8.png",
|
| 204 |
+
"caption": "Figure 8: Comparisons among TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT in terms of AoI. M=8\ud835\udc408M=8italic_M = 8, N=1\ud835\udc411N=1italic_N = 1 bit, T=1\ud835\udc471T=1italic_T = 1, \u03bbm=\u03bbm\u2032subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u2032\\lambda_{m}=\\lambda_{m^{\\prime}}italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT.",
|
| 205 |
+
"url": "http://arxiv.org/html/2311.02691v2/x13.png"
|
| 206 |
+
},
|
| 207 |
+
"9(a)": {
|
| 208 |
+
"figure_path": "2311.02691v2_figure_9(a).png",
|
| 209 |
+
"caption": "(a) T=1\ud835\udc471T=1italic_T = 1\nFigure 9: Comparisons among TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT in terms of AoI. N=1\ud835\udc411N=1italic_N = 1 bit, \u03bbm=\u03bbm\u2032=1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20321\\lambda_{m}=\\lambda_{m^{\\prime}}=1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 1.",
|
| 210 |
+
"url": "http://arxiv.org/html/2311.02691v2/x14.png"
|
| 211 |
+
},
|
| 212 |
+
"9(b)": {
|
| 213 |
+
"figure_path": "2311.02691v2_figure_9(b).png",
|
| 214 |
+
"caption": "(b) T=2\ud835\udc472T=2italic_T = 2\nFigure 9: Comparisons among TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT in terms of AoI. N=1\ud835\udc411N=1italic_N = 1 bit, \u03bbm=\u03bbm\u2032=1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20321\\lambda_{m}=\\lambda_{m^{\\prime}}=1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 1.",
|
| 215 |
+
"url": "http://arxiv.org/html/2311.02691v2/x15.png"
|
| 216 |
+
},
|
| 217 |
+
"10(a)": {
|
| 218 |
+
"figure_path": "2311.02691v2_figure_10(a).png",
|
| 219 |
+
"caption": "(a) \u03bbm=0.5subscript\ud835\udf06\ud835\udc5a0.5\\lambda_{m}=0.5italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = 0.5\nFigure 10: Impact of the number of users on average AoIs for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes under different status updating packet arrival rates. N=1\ud835\udc411N=1italic_N = 1 bit, T=1\ud835\udc471T=1italic_T = 1, \u03bbm=\u03bbm\u2032subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u2032\\lambda_{m}=\\lambda_{m^{\\prime}}italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT.",
|
| 220 |
+
"url": "http://arxiv.org/html/2311.02691v2/x16.png"
|
| 221 |
+
},
|
| 222 |
+
"10(b)": {
|
| 223 |
+
"figure_path": "2311.02691v2_figure_10(b).png",
|
| 224 |
+
"caption": "(b) \u03bbm=1subscript\ud835\udf06\ud835\udc5a1\\lambda_{m}=1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = 1\nFigure 10: Impact of the number of users on average AoIs for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes under different status updating packet arrival rates. N=1\ud835\udc411N=1italic_N = 1 bit, T=1\ud835\udc471T=1italic_T = 1, \u03bbm=\u03bbm\u2032subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u2032\\lambda_{m}=\\lambda_{m^{\\prime}}italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT.",
|
| 225 |
+
"url": "http://arxiv.org/html/2311.02691v2/x17.png"
|
| 226 |
+
},
|
| 227 |
+
"10(c)": {
|
| 228 |
+
"figure_path": "2311.02691v2_figure_10(c).png",
|
| 229 |
+
"caption": "(c) \u03bbm=1.5subscript\ud835\udf06\ud835\udc5a1.5\\lambda_{m}=1.5italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = 1.5\nFigure 10: Impact of the number of users on average AoIs for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes under different status updating packet arrival rates. N=1\ud835\udc411N=1italic_N = 1 bit, T=1\ud835\udc471T=1italic_T = 1, \u03bbm=\u03bbm\u2032subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u2032\\lambda_{m}=\\lambda_{m^{\\prime}}italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT.",
|
| 230 |
+
"url": "http://arxiv.org/html/2311.02691v2/x18.png"
|
| 231 |
+
},
|
| 232 |
+
"10(d)": {
|
| 233 |
+
"figure_path": "2311.02691v2_figure_10(d).png",
|
| 234 |
+
"caption": "(d) \u03bbm=3subscript\ud835\udf06\ud835\udc5a3\\lambda_{m}=3italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = 3\nFigure 10: Impact of the number of users on average AoIs for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT schemes under different status updating packet arrival rates. N=1\ud835\udc411N=1italic_N = 1 bit, T=1\ud835\udc471T=1italic_T = 1, \u03bbm=\u03bbm\u2032subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u2032\\lambda_{m}=\\lambda_{m^{\\prime}}italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT.",
|
| 235 |
+
"url": "http://arxiv.org/html/2311.02691v2/x19.png"
|
| 236 |
+
},
|
| 237 |
+
"11": {
|
| 238 |
+
"figure_path": "2311.02691v2_figure_11.png",
|
| 239 |
+
"caption": "Figure 11: Impact of packet size N\ud835\udc41Nitalic_N on average AoI for TDMA-NRT, TDMA-RT, NOMA-NRT and NOMA-RT. M=8\ud835\udc408M=8italic_M = 8, T=1\ud835\udc471T=1italic_T = 1, SNR=20absent20=20= 20dB.",
|
| 240 |
+
"url": "http://arxiv.org/html/2311.02691v2/x20.png"
|
| 241 |
+
},
|
| 242 |
+
"12": {
|
| 243 |
+
"figure_path": "2311.02691v2_figure_12.png",
|
| 244 |
+
"caption": "Figure 12: Relationship between AoI and AEC. M=8\ud835\udc408M=8italic_M = 8, T=1\ud835\udc471T=1italic_T = 1, N=1\ud835\udc411N=1italic_N = 1 bit, \u03bbm=\u03bbm\u2032=0.1subscript\ud835\udf06\ud835\udc5asubscript\ud835\udf06superscript\ud835\udc5a\u20320.1\\lambda_{m}=\\lambda_{m^{\\prime}}=0.1italic_\u03bb start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT = italic_\u03bb start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT = 0.1.",
|
| 245 |
+
"url": "http://arxiv.org/html/2311.02691v2/x21.png"
|
| 246 |
+
},
|
| 247 |
+
"13": {
|
| 248 |
+
"figure_path": "2311.02691v2_figure_13.png",
|
| 249 |
+
"caption": "Figure 13: Illustration of the status updating process for TDMA-RT scheme by a Markov chain. The expressions along side the arrow lines denote the corresponding duration spent by the state transition.",
|
| 250 |
+
"url": "http://arxiv.org/html/2311.02691v2/x22.png"
|
| 251 |
+
},
|
| 252 |
+
"14": {
|
| 253 |
+
"figure_path": "2311.02691v2_figure_14.png",
|
| 254 |
+
"caption": "Figure 14: Relationships of \u03b4i,m\u2032m\u2032superscriptsubscript\ud835\udeff\ud835\udc56superscript\ud835\udc5a\u2032superscript\ud835\udc5a\u2032\\delta_{i,m^{\\prime}}^{m^{\\prime}}italic_\u03b4 start_POSTSUBSCRIPT italic_i , italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUPERSCRIPT, \u03b4i,mmsuperscriptsubscript\ud835\udeff\ud835\udc56\ud835\udc5a\ud835\udc5a\\delta_{i,m}^{m}italic_\u03b4 start_POSTSUBSCRIPT italic_i , italic_m end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT, Pm\u2062m\u2032subscript\ud835\udc43\ud835\udc5asuperscript\ud835\udc5a\u2032P_{mm^{\\prime}}italic_P start_POSTSUBSCRIPT italic_m italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT and Pm\u2032\u2062msubscript\ud835\udc43superscript\ud835\udc5a\u2032\ud835\udc5aP_{m^{\\prime}m}italic_P start_POSTSUBSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT italic_m end_POSTSUBSCRIPT in NOMA-RT scheme.",
|
| 255 |
+
"url": "http://arxiv.org/html/2311.02691v2/x23.png"
|
| 256 |
+
},
|
| 257 |
+
"15": {
|
| 258 |
+
"figure_path": "2311.02691v2_figure_15.png",
|
| 259 |
+
"caption": "Figure 15: State transition diagram for \u03b4i,m\u2032m\u2032superscriptsubscript\ud835\udeff\ud835\udc56superscript\ud835\udc5a\u2032superscript\ud835\udc5a\u2032\\delta_{i,m^{\\prime}}^{m^{\\prime}}italic_\u03b4 start_POSTSUBSCRIPT italic_i , italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT end_POSTSUPERSCRIPT in consecutive frames in NOMA-RT scheme.",
|
| 260 |
+
"url": "http://arxiv.org/html/2311.02691v2/x24.png"
|
| 261 |
+
},
|
| 262 |
+
"16": {
|
| 263 |
+
"figure_path": "2311.02691v2_figure_16.png",
|
| 264 |
+
"caption": "Figure 16: State transition diagram for the states at the end of the m\ud835\udc5amitalic_m-th and m\u2032superscript\ud835\udc5a\u2032m^{\\prime}italic_m start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT-th slot of each frame in NOMA-RT scheme.",
|
| 265 |
+
"url": "http://arxiv.org/html/2311.02691v2/x25.png"
|
| 266 |
+
},
|
| 267 |
+
"17": {
|
| 268 |
+
"figure_path": "2311.02691v2_figure_17.png",
|
| 269 |
+
"caption": "Figure 17: Illustration of the state transition process from the end instant of the (j\u22121\ud835\udc571j-1italic_j - 1)-th successful transmission to the end instant of the j\ud835\udc57jitalic_j-th successful transmission. The expressions along side the arrow lines denote the corresponding duration spent by the state transition.",
|
| 270 |
+
"url": "http://arxiv.org/html/2311.02691v2/x26.png"
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
"validation": true,
|
| 274 |
+
"references": [],
|
| 275 |
+
"url": "http://arxiv.org/html/2311.02691v2"
|
| 276 |
+
}
|
20241217/2311.07889v2.json
ADDED
|
@@ -0,0 +1,239 @@
| 1 |
+
{
|
| 2 |
+
"title": "Satisfying the Restricted Isometry Property with the Optimal Number of Rows and Slightly Less Randomness",
|
| 3 |
+
"abstract": "A matrix satisfies the restricted isometry property if is approximately equal to for all -sparse vectors .\nWe give a construction of RIP matrices with the optimal rows using bits of randomness.\nThe main technical ingredient is an extension of the Hanson-Wright inequality to -biased distributions.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "A matrix is said to satisfy the -restricted isometry property if for every -sparse vector , one has\nwhere denotes the norm.\nThis notion, introduced by Cand\u00e8s and Tao [CT05 ###reference_bx11###], has many applications especially in the field of compressed sensing [Can08 ###reference_bx9###].\nA major goal is to construct such matrices so that the number of rows is as small as possible compared to and .\nIn this note, we will mostly ignore the dependence on .\nIt is known that rows are necessary for the restricted isometry property to hold [FPRU10 ###reference_bx15###].\nIf the construction is required to be deterministic there are many constructions with rows [Kas75 ###reference_bx19###, AGHP92 ###reference_bx1###, DeV07 ###reference_bx14###, NT11 ###reference_bx25###].\nIn a breakthrough result due to Bourgain, Dilworth, Ford, Konyagin, and Kutzarova [BDF+11 ###reference_bx3###], a construction with rows was obtained, for some very small constant .\nThis was later improved by Mixon [Mix15 ###reference_bx22###], and Bandeira, Mixon, and Moreira [BMM17 ###reference_bx7###] conditioned on a number-theoretic conjecture.\nIt is known that significant improvements would imply explicit constructions of Ramsey graphs [Gam20 ###reference_bx16###].\nOn the other hand, there exist randomized constructions that do achieve the optimal number of rows.\nIn particular, if each entry is an independent Gaussian or Bernoulli random variable, then the restricted isometry property holds for [CT06 ###reference_bx12###, BDDW08 ###reference_bx2###, MPTJ08 ###reference_bx23###].\nRandomized constructions can be evaluated by the number of bits of randomness they use.\nWhen all entries are independent, then bits of randomness are needed.\nThus, towards the goal of deterministic constructions of matrices that satisfy the restricted isometry property, one can ask if there exist constructions that use fewer bits of randomness.\nThere is a long line of work that considers a construction where each row of is a random row of a Fourier or Walsh-Hadamard matrix [CT06 ###reference_bx12###, RV08 ###reference_bx26###, CGV13 ###reference_bx10###, Bou14 ###reference_bx8###, HR17 ###reference_bx17###].\nSuch constructions use bits of randomness.\nThe analysis due to Haviv and Regev [HR17 ###reference_bx17###], and later improved by [BDJR21 ###reference_bx4###], obtain the best-known bound on the number of rows .\nHowever, it is known that when the rows are obtained from a Walsh-Hadamard matrix, the number of rows must be [BLL+23 ###reference_bx6###].\nThus, such constructions can not achieve the optimal number of rows that constructions with independent entries can.\nAnother choice of construction allows for the entries of to come from a -wise independent distribution.\nCombining the analysis of [CW09 ###reference_bx13###] with [BDDW08 ###reference_bx2###], a slight modification to the analysis for independent entries holds for -wise independent entries.\nIn particular, one can let be the optimal .\nStandard constructions of -wise independent random variables when require bits of randomness.\nThis was slightly improved in [KN10 ###reference_bx20###] to a construction that requires only bits of randomness.\nFinally, a third line of work uses pseudorandom properties of the Legendre symbol.\nThis approach uses bits of randomness, but requires rows [BFMM16 ###reference_bx5###].\nIn this note, we use almost -wise independent distributions when to show that that random bits is enough to construct a matrix that satisfies 
the restricted isometry property.\nThis is the least amount of randomness used among all randomized constructions with the optimal number of rows listed above.\nHowever, when for some constant for example, this recovers the result in [KN10 ###reference_bx20###] as in this case.\nThe pseudorandom properties of the almost -wise independent distributions that we use are similar to the pseudorandom properties of the Legendre symbol used in [BFMM16 ###reference_bx5###].\nCompared to [KN10 ###reference_bx20###] and [BFMM16 ###reference_bx5###], our proof is arguably simpler.\nHowever, it is conjectured that the Legendre symbol can be used to construct deterministic RIP matrices, something that is not true for the techniques in this note.\nThere exists a distribution of matrices for that can be sampled efficiently using bits of randomness such that a sample from this distribution satisfies the -restricted isometry property with high probability.\nThe proof of this theorem follows the same structure as [CW09 ###reference_bx13###] with [BDDW08 ###reference_bx2###] when the entries of come from a truly -wise independent distribution.\nIn particular, one focuses on the more general question of constructing a matrix that is a Johnson-Lindenstrauss projection.\nThat is, one shows that for any fixed unit vector\nfor some constant .\nIt was shown in [BDDW08 ###reference_bx2###] that if Eq. (1 ###reference_###) holds for when , then the random matrix satisfies the restricted isometry property with high probability.\nIn this note, we only consider the case of when is -sparse.\nThat is, when is not -sparse, and the entries of come from an almost -wise independent distribution, the number of random bits required for our generalization of the Hanson-Wright inequality to hold is too large to improve upon the result in [CW09 ###reference_bx13###].\nHowever, for the purposes of constructing a matrix that satisfies the restricted isometry property, it is enough to consider only -sparse .\nThis is because for the restricted isometry property to hold, it is enough to show that for -sparse vectors ."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Preliminaries",
|
| 15 |
+
"text": "If is a vector, we define the norm\nIf is a matrix, we define the norms\nThe construction of matrices is derived from -biased distributions, which we define below.\nA distribution is -biased -wise independent if when is sampled uniformly from , for all such that ,\nThere exist -biased -wise independent distributions such that , due to [AGHP92 ###reference_bx1###, NN93 ###reference_bx24###].\nThe main ingredient of the proof of Theorem 1.1 ###reference_theorem1### is a generalization of the Hanson-Wright inequality.\nThis inequality can often be used to obtain concentration inequalities for quadratic forms using Markov\u2019s inequality, as will be done here.\nWe state the original below [HW71 ###reference_bx18###] (see Theorem 3.1 in [KN14 ###reference_bx21###]).\nThere exists a constant such that the following holds.\nLet be independent random variables uniform over .\nThen for any symmetric and integer a power of ,"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Proof of the Main Theorem",
|
| 21 |
+
"text": "The main technical ingredient of this note is a generalization of the Hanson-Wright inequality [HW71 ###reference_bx18###] for -biased distributions which we state and prove below.\nThere exists a constant such that the following holds.\nLet be a sample from an -biased -wise independent distribution from for any integer a power of .\nThen for any symmetric ,\nLet .\nThen,\nLet be the set of sequences in such that each appears in an even number of pairs.\nThen Eq. (2 ###reference_###) can be rewritten as\nThe equality follows from the definition of and the fact that for all .\nThe inequality follows from the fact that when the are independent, the first sum in Eq. (3 ###reference_###) evaluates to , and thus, Theorem 2.2 ###reference_theorem2### can be used to bound the second sum.\nBecause the come from an -biased -wise independent distribution,\nwhen .\nThus, the left-hand side of Eq. (3 ###reference_###) is bounded above by\nas desired.\n\u220e\nWe now prove Theorem 1.1 ###reference_theorem1### using the same main ideas as in [KN10 ###reference_bx20###, Theorem 6].\nWe let come from a -biased -wise independent distribution for , normalized so that all entries are from the set .\nThere exists a construction using the desired number of random bits, , due to [AGHP92 ###reference_bx1###, NN93 ###reference_bx24###].\nBy Theorem 5.2 in [BDDW08 ###reference_bx2###], it is enough to show that Eq. (1 ###reference_###) holds when is a -sparse vector such that , and .\nFor every -sparse vector , let be a block-diagonal matrix with blocks, where each block is equal to .\nLet contain each row of stacked in a vector.\nNote that contains at most non-zero entries as is -sparse, and thus .\nBy Lemma 3.1 ###reference_theorem1###, for some constants and ,\nBy direct computation, , , and .\nThus, the bound becomes\nBy Markov\u2019s inequality, one has\nfor some constant and , where the last inequality follows by noting that .\n\u220e"
|
| 22 |
+
}
|
| 23 |
+
],
|
| 24 |
+
"appendix": [],
|
| 25 |
+
"tables": {},
|
| 26 |
+
"image_paths": {},
|
| 27 |
+
"validation": true,
|
| 28 |
+
"references": [
|
| 29 |
+
{
|
| 30 |
+
"1": {
|
| 31 |
+
"title": "Simple constructions of almost -wise independent random\nvariables.",
|
| 32 |
+
"author": "N. Alon, O. Goldreich, J. H\u00e5stad, and R. Peralta.",
|
| 33 |
+
"venue": "Random Structures Algorithms, 3(3):289\u2013304, 1992.",
|
| 34 |
+
"url": null
|
| 35 |
+
}
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"2": {
|
| 39 |
+
"title": "A simple proof of the restricted isometry property for random\nmatrices.",
|
| 40 |
+
"author": "R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin.",
|
| 41 |
+
"venue": "Constr. Approx., 28(3):253\u2013263, 2008.",
|
| 42 |
+
"url": null
|
| 43 |
+
}
|
| 44 |
+
},
|
| 45 |
+
{
|
| 46 |
+
"3": {
|
| 47 |
+
"title": "Explicit constructions of RIP matrices and related problems.",
|
| 48 |
+
"author": "J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova.",
|
| 49 |
+
"venue": "Duke Math. J., 159(1):145\u2013185, 2011.",
|
| 50 |
+
"url": null
|
| 51 |
+
}
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"4": {
|
| 55 |
+
"title": "Sparse recovery in bounded Riesz systems with applications to\nnumerical methods for PDEs.",
|
| 56 |
+
"author": "S. Brugiapaglia, S. Dirksen, H. C. Jung, and H. Rauhut.",
|
| 57 |
+
"venue": "Appl. Comput. Harmon. Anal., 53:231\u2013269, 2021.",
|
| 58 |
+
"url": null
|
| 59 |
+
}
|
| 60 |
+
},
|
| 61 |
+
{
|
| 62 |
+
"5": {
|
| 63 |
+
"title": "Derandomizing restricted isometries via the Legendre symbol.",
|
| 64 |
+
"author": "A. S. Bandeira, M. Fickus, D. G. Mixon, and J. Moreira.",
|
| 65 |
+
"venue": "Constr. Approx., 43(3):409\u2013424, 2016.",
|
| 66 |
+
"url": null
|
| 67 |
+
}
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"6": {
|
| 71 |
+
"title": "An improved lower bound for sparse reconstruction from subsampled\nWalsh matrices.",
|
| 72 |
+
"author": "J. Blasiok, P. Lopatto, K. Luh, J. Marcinek, and S. Rao.",
|
| 73 |
+
"venue": "Discrete Anal., pages Paper No. 3, 9, 2023.",
|
| 74 |
+
"url": null
|
| 75 |
+
}
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"7": {
|
| 79 |
+
"title": "A conditional construction of restricted isometries.",
|
| 80 |
+
"author": "A. S. Bandeira, D. G. Mixon, and J. Moreira.",
|
| 81 |
+
"venue": "Int. Math. Res. Not. IMRN, (2):372\u2013381, 2017.",
|
| 82 |
+
"url": null
|
| 83 |
+
}
|
| 84 |
+
},
|
| 85 |
+
{
|
| 86 |
+
"8": {
|
| 87 |
+
"title": "An improved estimate in the restricted isometry problem.",
|
| 88 |
+
"author": "J. Bourgain.",
|
| 89 |
+
"venue": "In Geometric aspects of functional analysis, volume 2116 of\nLecture Notes in Math., pages 65\u201370. Springer, Cham, 2014.",
|
| 90 |
+
"url": null
|
| 91 |
+
}
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"9": {
|
| 95 |
+
"title": "The restricted isometry property and its implications for compressed\nsensing.",
|
| 96 |
+
"author": "E. J. Cand\u00e8s.",
|
| 97 |
+
"venue": "C. R. Math. Acad. Sci. Paris, 346(9-10):589\u2013592, 2008.",
|
| 98 |
+
"url": null
|
| 99 |
+
}
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"10": {
|
| 103 |
+
"title": "Restricted isometry of Fourier matrices and list decodability of\nrandom linear codes.",
|
| 104 |
+
"author": "M. Cheraghchi, V. Guruswami, and A. Velingker.",
|
| 105 |
+
"venue": "SIAM J. Comput., 42(5):1888\u20131914, 2013.",
|
| 106 |
+
"url": null
|
| 107 |
+
}
|
| 108 |
+
},
|
| 109 |
+
{
|
| 110 |
+
"11": {
|
| 111 |
+
"title": "Decoding by linear programming.",
|
| 112 |
+
"author": "E. J. Candes and T. Tao.",
|
| 113 |
+
"venue": "IEEE Trans. Inform. Theory, 51(12):4203\u20134215, 2005.",
|
| 114 |
+
"url": null
|
| 115 |
+
}
|
| 116 |
+
},
|
| 117 |
+
{
|
| 118 |
+
"12": {
|
| 119 |
+
"title": "Near-optimal signal recovery from random projections: universal\nencoding strategies?",
|
| 120 |
+
"author": "E. J. Candes and T. Tao.",
|
| 121 |
+
"venue": "IEEE Trans. Inform. Theory, 52(12):5406\u20135425, 2006.",
|
| 122 |
+
"url": null
|
| 123 |
+
}
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"13": {
|
| 127 |
+
"title": "Numerical linear algebra in the streaming model.",
|
| 128 |
+
"author": "K. L. Clarkson and D. P. Woodruff.",
|
| 129 |
+
"venue": "In M. Mitzenmacher, editor, Proceedings of the 41st Annual\nACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May\n31 - June 2, 2009, pages 205\u2013214. ACM, 2009.",
|
| 130 |
+
"url": null
|
| 131 |
+
}
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"14": {
|
| 135 |
+
"title": "Deterministic constructions of compressed sensing matrices.",
|
| 136 |
+
"author": "R. A. DeVore.",
|
| 137 |
+
"venue": "J. Complexity, 23(4-6):918\u2013925, 2007.",
|
| 138 |
+
"url": null
|
| 139 |
+
}
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"15": {
|
| 143 |
+
"title": "The Gelfand widths of -balls for .",
|
| 144 |
+
"author": "S. Foucart, A. Pajor, H. Rauhut, and T. Ullrich.",
|
| 145 |
+
"venue": "J. Complexity, 26(6):629\u2013640, 2010.",
|
| 146 |
+
"url": null
|
| 147 |
+
}
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"16": {
|
| 151 |
+
"title": "Explicit construction of RIP matrices is Ramsey-hard.",
|
| 152 |
+
"author": "D. Gamarnik.",
|
| 153 |
+
"venue": "Comm. Pure Appl. Math., 73(9):2043\u20132048, 2020.",
|
| 154 |
+
"url": null
|
| 155 |
+
}
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"17": {
|
| 159 |
+
"title": "The restricted isometry property of subsampled Fourier matrices.",
|
| 160 |
+
"author": "I. Haviv and O. Regev.",
|
| 161 |
+
"venue": "In Geometric aspects of functional analysis, volume 2169 of\nLecture Notes in Math., pages 163\u2013179. Springer, Cham, 2017.",
|
| 162 |
+
"url": null
|
| 163 |
+
}
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"18": {
|
| 167 |
+
"title": "A bound on tail probabilities for quadratic forms in independent\nrandom variables.",
|
| 168 |
+
"author": "D. L. Hanson and F. T. Wright.",
|
| 169 |
+
"venue": "Ann. Math. Statist., 42:1079\u20131083, 1971.",
|
| 170 |
+
"url": null
|
| 171 |
+
}
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"19": {
|
| 175 |
+
"title": "The diameters of octahedra.",
|
| 176 |
+
"author": "B. S. Kashin.",
|
| 177 |
+
"venue": "Uspehi Mat. Nauk, 30(4(184)):251\u2013252, 1975.",
|
| 178 |
+
"url": null
|
| 179 |
+
}
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"20": {
|
| 183 |
+
"title": "A derandomized sparse Johnson-Lindenstrauss transform.",
|
| 184 |
+
"author": "D. M. Kane and J. Nelson.",
|
| 185 |
+
"venue": "arXiv:1006.3585, 2010.",
|
| 186 |
+
"url": null
|
| 187 |
+
}
|
| 188 |
+
},
|
| 189 |
+
{
|
| 190 |
+
"21": {
|
| 191 |
+
"title": "Sparser Johnson-Lindenstrauss transforms.",
|
| 192 |
+
"author": "D. M. Kane and J. Nelson.",
|
| 193 |
+
"venue": "J. ACM, 61(1):Art. 4, 23, 2014.",
|
| 194 |
+
"url": null
|
| 195 |
+
}
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"22": {
|
| 199 |
+
"title": "Explicit matrices with the restricted isometry property: breaking the\nsquare-root bottleneck.",
|
| 200 |
+
"author": "D. G. Mixon.",
|
| 201 |
+
"venue": "In Compressed sensing and its applications, Appl. Numer.\nHarmon. Anal., pages 389\u2013417. Birkh\u00e4user/Springer, Cham, 2015.",
|
| 202 |
+
"url": null
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"23": {
|
| 207 |
+
"title": "Uniform uncertainty principle for Bernoulli and subgaussian\nensembles.",
|
| 208 |
+
"author": "S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann.",
|
| 209 |
+
"venue": "Constr. Approx., 28(3):277\u2013289, 2008.",
|
| 210 |
+
"url": null
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"24": {
|
| 215 |
+
"title": "Small-bias probability spaces: efficient constructions and\napplications.",
|
| 216 |
+
"author": "J. Naor and M. Naor.",
|
| 217 |
+
"venue": "SIAM J. Comput., 22(4):838\u2013856, 1993.",
|
| 218 |
+
"url": null
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"25": {
|
| 223 |
+
"title": "On the size of incoherent systems.",
|
| 224 |
+
"author": "J. L. Nelson and V. N. Temlyakov.",
|
| 225 |
+
"venue": "J. Approx. Theory, 163(9):1238\u20131245, 2011.",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"26": {
|
| 231 |
+
"title": "On sparse reconstruction from Fourier and Gaussian measurements.",
|
| 232 |
+
"author": "M. Rudelson and R. Vershynin.",
|
| 233 |
+
"venue": "Comm. Pure Appl. Math., 61(8):1025\u20131045, 2008.",
|
| 234 |
+
"url": null
|
| 235 |
+
}
|
| 236 |
+
}
|
| 237 |
+
],
|
| 238 |
+
"url": "http://arxiv.org/html/2311.07889v2"
|
| 239 |
+
}
|
20241217/2311.14975v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2311.16900v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2312.16476v6.json
ADDED
|
@@ -0,0 +1,666 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "SVGDreamer: Text Guided SVG Generation with Diffusion Model",
|
| 3 |
+
"abstract": "Recently, text-guided scalable vector graphics (SVGs) synthesis has shown promise in domains such as iconography and sketch. However, existing text-to-SVG generation methods lack editability and struggle with visual quality and result diversity.\nTo address these limitations, we propose a novel text-guided vector graphics synthesis method called SVGDreamer.\nSVGDreamer incorporates a semantic-driven image vectorization (SIVE) process that enables the decomposition of synthesis into foreground objects and background, thereby enhancing editability.\nSpecifically, the SIVE process introduces attention-based primitive control and an attention-mask loss function for effective control and manipulation of individual elements.\nAdditionally, we propose a Vectorized Particle-based Score Distillation (VPSD) approach to address issues of shape over-smoothing, color over-saturation, limited diversity, and slow convergence of the existing text-to-SVG generation methods by modeling SVGs as distributions of control points and colors. Furthermore, VPSD leverages a reward model to re-weight vector particles, which improves aesthetic appeal and accelerates convergence.\nExtensive experiments are conducted to validate the effectiveness of SVGDreamer, demonstrating its superiority over baseline methods in terms of editability, visual quality, and diversity.\nProject page: https://ximinng.github.io/SVGDreamer-project/",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Scalable Vector Graphics (SVGs) represent visual concepts using geometric primitives such as B\u00e9zier curves, polygons, and lines. Due to their inherent nature, SVGs are highly suitable for visual design applications, such as posters and logos.\nSecondly, compared to raster images, vector images can maintain compact file sizes, making them more efficient for storage and transmission purposes. More importantly, vector images offer greater editability, allowing designers to easily select, modify, and compose elements. This attribute is particularly crucial in the design process, as it allows for seamless adjustments and creative exploration.\nIn recent years, there has been a growing interest in general vector graphics generation. Various optimization-based methods [4 ###reference_b4###, 28 ###reference_b28###, 19 ###reference_b19###, 40 ###reference_b40###, 41 ###reference_b41###, 34 ###reference_b34###, 12 ###reference_b12###, 48 ###reference_b48###] have been proposed, building upon the differentiable rasterizer DiffVG [14 ###reference_b14###]. These methods, such as CLIPDraw [4 ###reference_b4###] and VectorFusion [12 ###reference_b12###], differ primarily in their approach to supervision.\nSome works [4 ###reference_b4###, 28 ###reference_b28###, 19 ###reference_b19###, 34 ###reference_b34###, 40 ###reference_b40###, 41 ###reference_b41###] combine the CLIP model [23 ###reference_b23###] with DiffVG [14 ###reference_b14###], using CLIP as a source of supervision.\nMore recently, the significantly progress achieved by Text-to-Image (T2I) diffusion models [20 ###reference_b20###, 26 ###reference_b26###, 24 ###reference_b24###, 27 ###reference_b27###, 37 ###reference_b37###] has inspired the task of text-to-vector-graphics. Both VectorFusion [12 ###reference_b12###] and DiffSketcher [48 ###reference_b48###] attempted to utilize T2I diffusion models for supervision. These models make use of the high-quality raster images generated by T2I models as targets to optimize the parameters of vector images. Additionally, the priors embedded within T2I models can be distilled and applied in this task.\nConsequently, models that use T2I for supervision generally perform better than those using the CLIP model.\nDespite their impressive performance, existing T2I-based methods have certain limitations. Firstly, the vector images generated by these methods lack editability. Unlike the conventional approach of creating vector graphics, where individual elements are added one by one, T2I-based methods do not distinguish between different components during synthesis. 
As a result, the objects become entangled, making it challenging to edit or modify a single object independently.\nSecondly, there is still a large room for improvement in visual quality and diversity of the results generated by these methods.\nBoth VectorFusion [12 ###reference_b12###] and DiffSketcher [48 ###reference_b48###] extended the Score Distillation Sampling (SDS) [22 ###reference_b22###] to distill priors from the T2I models.\nHowever, it has been observed that SDS can lead to issues such as color over-saturation and over-smoothing, resulting in a lack of fine details in the generated vector images.\nBesides, SDS optimizes a set of control points in the vector graphic space to obtain the average state of the vector graphic corresponding to the text prompt in a mode-seeking manner [22 ###reference_b22###].\nThis leads to a lack of diversity and detailed construction in the SDS-based approach [12 ###reference_b12###, 48 ###reference_b48###], along with absent text prompt objects.\nTo address the aforementioned issues, we present a new model called SVGDreamer for text-guided vector graphics generation. Our primary objective is to produce vector graphics of superior quality that offer enhanced editability, visual appeal, and diversity.\nTo ensure editability, we propose a semantic-driven image vectorization (SIVE) process. This approach incorporates an innovative attention-based primitive control strategy, which facilitates the decomposition of the synthesis process into foreground objects and background.\nTo initialize the control points for each foreground object and background, we leverage cross-attention maps queried by text tokens.\nFurthermore, we introduce an attention-mask loss function, which optimizes the graphic elements hierarchically. The proposed SIVE process ensures the separation and editability of the individual elements, promoting effective control and manipulation of the resulting vector graphics.\nTo improve the visual quality and diversity of the generated vector graphics, we introduce Vectorized Particle-based Score Distillation (VPSD) for vector graphics refinement.\nPrevious works in vector graphics synthesis [12 ###reference_b12###, 48 ###reference_b48###, 11 ###reference_b11###] that utilized SDS often encountered issues like shape over-smoothing, color over-saturation, limited diversity, and slow convergence in synthesized results [22 ###reference_b22###, 48 ###reference_b48###].\nTo address these issues, VPSD models SVGs as distributions of control points and colors, respectively.\nVPSD adopts a LoRA [10 ###reference_b10###] network to estimate these distributions, aligning vector graphics with the pretrained diffusion model.\nFurthermore, to enhance the aesthetic appeal of the generated vector graphics, we integrate ReFL [49 ###reference_b49###] to fine-tune the estimation network. Through this refinement process, we achieve final vector graphics that exhibit high editability, superior visual quality, and increased diversity.\nTo validate the effectiveness of our proposed method, we perform extensive experiments to evaluate the model across multiple aspects. In summary, our contributions can be summarized as follows:\nWe introduce SVGDreamer, a novel model for text-to-SVG generation. This novel model is capable of generating high-quality vector graphics while preserving editability.\nWe present the semantic-driven image vectorization (SIVE) method, which ensures that the generated vector objects are separate and flexible to edit. 
Additionally, we propose the vectorized particle-based score distillation (VPSD) loss to guarantee that the generated vector graphics exhibit both exceptional visual quality and a wide range of diversity.\nWe conduct comprehensive experiments to evaluate the effectiveness of our proposed method. Results demonstrate the superiority of our approach compared to baseline methods. Moreover, our model showcases strong generalization capabilities in generating diverse types of vector graphics.\n###figure_1###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Vector Graphics Generation",
|
| 21 |
+
"text": "Scalable Vector Graphics (SVGs) offer a declarative format for visual concepts expressed as primitives. One approach to creating SVG content is to use Sequence-To-Sequence (seq2seq) models to generate SVGs [5 ###reference_b5###, 16 ###reference_b16###, 1 ###reference_b1###, 25 ###reference_b25###, 43 ###reference_b43###, 44 ###reference_b44###, 46 ###reference_b46###].\nThese methods heavily rely on dataset in vector form, which limits their generalization ability and their capacity to synthesize complex vector graphics. Instead of directly learning an SVG generation network, an alternative method of vector synthesis is to optimize towards a matching image during evaluation time.\nLi et al. [14 ###reference_b14###] introduce a differentiable rasterizer that bridges the vector graphics and raster image domains. While image generation methods that traditionally operate over vector graphics require a vector-based dataset, recent work has demonstrated the use of differentiable renderers to overcome this limitation [30 ###reference_b30###, 39 ###reference_b39###, 25 ###reference_b25###, 28 ###reference_b28###, 17 ###reference_b17###, 38 ###reference_b38###, 36 ###reference_b36###, 48 ###reference_b48###]. Furthermore, recent advances in visual text embedding contrastive language-image pre-training model (CLIP) [23 ###reference_b23###] have enabled a number of successful methods for synthesizing sketches, such as CLIPDraw[4 ###reference_b4###], CLIP-CLOP [19 ###reference_b19###], and CLIPasso [40 ###reference_b40###]. A very recent work VectorFusion [12 ###reference_b12###] and DiffSketcher [48 ###reference_b48###] combine differentiable renderer with text-to-image diffusion model for vector graphics generation, resulting in promising results in fields such as iconography, pixel art, and sketch."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Text-to-Image Diffusion Model",
|
| 27 |
+
"text": "Denoising diffusion probabilistic models (DDPMs) [31 ###reference_b31###, 33 ###reference_b33###, 8 ###reference_b8###, 35 ###reference_b35###], particularly those conditioned on text, have shown promising results in text-to-image synthesis. For example, Classifier-Free Guidance (CFG) [7 ###reference_b7###] has improved visual quality and is widely used in large-scale text conditional diffusion model frameworks, including GLIDE [20 ###reference_b20###], Stable Diffusion [26 ###reference_b26###], DALL E 2 [24 ###reference_b24###], Imagen [27 ###reference_b27###] and DeepFloyd IF [37 ###reference_b37###].\nThe progress achieved by text-to-image diffusion models [20 ###reference_b20###, 26 ###reference_b26###, 24 ###reference_b24###, 27 ###reference_b27###] also promotes the development of a series of text-guided tasks, such as text-to-3D [22 ###reference_b22###]. In this work, we employ Stable Diffusion model to provide supervision for text-to-SVG generation."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Score Distillation Sampling",
|
| 33 |
+
"text": "Recent advances in natural image modeling have sparked significant research interest in utilizing powerful 2D pretrained models to recover 3D object structures [18 ###reference_b18###, 21 ###reference_b21###, 42 ###reference_b42###, 15 ###reference_b15###, 22 ###reference_b22###, 45 ###reference_b45###].\nRecent efforts such as DreamFusion [22 ###reference_b22###], Magic3D [15 ###reference_b15###] and Score Jacobian Chaining [42 ###reference_b42###] explore text-to-3D generation by exploiting a score distillation sampling (SDS) loss derived from a 2D text-to-image diffusion model [27 ###reference_b27###, 26 ###reference_b26###] instead, showing impressive results.\nThe development of text-to-SVG [12 ###reference_b12###, 48 ###reference_b48###] was inspired by this, but the resulting vector graphics have limited quality and exhibit a similar over-smoothness as the reconstructed 3D models.\nWang et al. [45 ###reference_b45###] extend the modeling of the 3D model as a random variable instead of a constant as in SDS and present variational score distillation to address the over-smoothing issues in text-to-3D generation."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Methodology",
|
| 39 |
+
"text": "In this section, we introduce SVGDreamer, an optimization-based method that creates a variety of vector graphics based on text prompts.\nWe define a vector graphic as a set of paths and color attributes .\nEach path consists of control points and one color attribute .\nWe optimize an SVG by back-propagating gradients of rasterized images to SVG path parameters via a differentiable renderer [14 ###reference_b14###].\nOur approach leverages the text-to-image diffusion model prior to guide the differentiable renderer and optimize the parametric graphic path , resulting in the synthesis of vector graphs that match the description of the text prompt .\nAs illustrated in Fig. 2 ###reference_###, our pipeline consists of two parts: semantic-driven image vectorization and SVG synthesis through VPSD optimization.\nThe first part is Semantic-driven Image VEctorization (SIVE), consisting of two stages: primitive initialization and semantic-aware optimization.\nWe rethink the application of attention mechanisms in synthesizing vector graphics.\nWe extract the cross-attention maps corresponding to different objects in the diffusion model and apply it to initialize control points and consolidate object vectorization.\nThis process allows us to decompose the foreground objects from the background.\nConsequently, the SIVE process generates vector objects which are independently editable. It separates vector objects by aggregating the curves that form them, which in turn simplifies the combination of vector graphics.\nIn Sec. 3.2 ###reference_###, we propose the Vectorized Particle-based Score Distillation (VPSD) to generate diverse high-quality text-matching vector graphics.\nVPSD is designed to model the distribution of vector path control points and colors for approximating the vector parameter distribution, thus obtaining vector results of diversity.\n###figure_2###"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.1",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "SIVE: Semantic-driven Image Vectorization",
|
| 45 |
+
"text": "Image rasterization is a mature technique in computer graphics, while image vectorization, the reverse path of rasterization, remains a major challenge.\nGiven an arbitrary input image, LIVE [17 ###reference_b17###] recursively learns the visual concepts by adding new optimizable closed B\u00e9zier paths and optimizing all these paths.\nHowever, LIVE [17 ###reference_b17###] struggles with grasping and distinguishing various subjects within an image, leading to identical paths being superimposed onto different visual subjects. And the LIVE-based method [17 ###reference_b17###, 12 ###reference_b12###] fails to represent intricate vector graphics consisting of complex paths. We propose a semantic-driven image vectorization method to address the aforementioned issue. This method consists of two main stages: primitive initialization and semantic-aware optimization.\nIn the initialization stage, we allocate distinct control points to different regions corresponding to various visual objects with the guidance of attention maps.\nIn the optimization stage, we introduce an attention-based mask loss function to hierarchically optimize the vector objects."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.1.1",
|
| 49 |
+
"parent_section_id": "3.1",
|
| 50 |
+
"section_name": "3.1.1 Primitive Initialization",
|
| 51 |
+
"text": "Vectorizing visual objects often involves assigning numerous paths, which leads to object-layer confusion in LIVE-based methods.\nTo address this issue, we suggest organizing vector graphic elements semantically and assigning paths to objects based on their semantics.\nWe initialize groups of object-level control points according to the cross-attention map corresponding to different objects in the text prompt.\nAnd we represent them as the foreground , where indicates the -th token in the text prompt.\nCorrespondingly, the rest will be treated as background.\nSuch design allows us to represent the attention maps of background and foreground as,\nwhere indicates the attention map of the background.\n indicates cross-attention score, where indicates -th token keys from text prompt, is pixel queries features, and is the latent projection dimension of the keys and queries.\nThen, inspired by DiffSketcher [48 ###reference_b48###], we normalize the attention maps using softmax and treat it as a distribution map to sample positions for the first control point of each B\u00e9zier curve.\nThe other control points () are sampled within a small radius (0.05 of image size) around to define the initial set of paths."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.1.2",
|
| 55 |
+
"parent_section_id": "3.1",
|
| 56 |
+
"section_name": "3.1.2 Semantic-aware Optimization",
|
| 57 |
+
"text": "In this stage, we utilize an attention-based mask loss to separately optimize the objects in the foreground and background.\nThis ensures that control points remain within their respective regions, aiding in object decomposition.\nNamely, the hierarchy only exists within the designated object and does not get mixed up with other objects.\nThis strategy fuels the permutations and combinations between objects that form different vector graphics, and enhances the editability of the objects themselves.\nSpecifically, we convert the attention map obtained during the initialization stage into reusable masks , foregrounds and one background mask in total. We do this by setting the attention score to 1 if it is greater than the threshold value, and to 0 otherwise.\nwhere is the target image, is mask, is the rendering."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Vectorized Particle-based Score Distillation",
|
| 63 |
+
"text": "While vectorizing a rasterized diffusion sample is lossy, recent techniques [12 ###reference_b12###, 48 ###reference_b48###] have identified the SDS loss [22 ###reference_b22###] as beneficial for our task of generating vector graphics.\nTo synthesize a vector image that matches a given text prompt , they directly optimize the parameters of a differentiable rasterizer via SDS loss.\nAt each iteration, the differentiable rasterizer is used to render a raster image , which is augmented to obtain a .\nThen, the pretrained latent diffusion model (LDM) uses a VAE encoder [3 ###reference_b3###] to encode into a latent representation , where and is the encoder downsample factor.\nFinally, the gradient of SDS is estimated by,\nwhere is the weighting function. And noised to form .\nUnfortunately, SDS-based methods often suffer from issues such as shape over-smoothing, color over-saturation, limited diversity in results, and slow convergence in synthesis results [22 ###reference_b22###, 12 ###reference_b12###, 48 ###reference_b48###, 11 ###reference_b11###].\n###figure_3### Inspired by the principled variational score distillation framework [45 ###reference_b45###], we propose vectorized particle-based score distillation (VPSD) to address the aforementioned issues.\nInstead of modeling SVGs as a set of control points and corresponding colors like SDS, we model SVGs as the distributions of control points and colors respectively. In principle, given a text prompt , there exists a probabilistic distribution of all possible vector shapes representations.\nUnder a vector representation parameterized by , such a distribution can be modeled as a probabilistic density .\nCompared with SDS that optimizes for the single , VPSD optimizes for the whole distribution , from which we can sample .\nMotivated by previous particle-based variational inference methods, we maintain groups of vector parameters as particles to estimate the distribution , and will be sampled from the optimal distribution if the optimization converges.\nThis optimization can be realized through two score functions: one that approximates the optimal distribution with a noisy real image, and one that represents the current distribution with a noisy rendered image.\nThe score function of noisy real images can be approximated by the pretrained diffusion model [26 ###reference_b26###] .\nThe score function of noisy rendered images is estimated by another noise prediction network , which is trained on the rendered images by .\nThe gradient of VPSD can be formed as,\nwhere and in indicate control point variables and color variables, the weighting function is a hyper-parameter. And .\nIn practice, as suggested by [45 ###reference_b45###], we parameterize using a LoRA (Low-rank adaptation [10 ###reference_b10###]) of the pretrained diffusion model.\nThe rendered image not only serves to calculate the VPSD gradient but also gets updated by LoRA,\nwhere is the Gaussian noise. Only the parameters of the LoRA model will be updated, while the parameters of other diffusion models will remain unchanged to minimize computational complexity.\nIn [45 ###reference_b45###], only randomly selected particles update the LoRA network in each iteration.\nHowever, this approach neglects the learning progression of vector particles, which are used to represent the optimal SVG distributions. 
Furthermore, these networks typically require numerous iterations to approximate the theoretical optimal distribution, resulting in slow convergence.\nIn VPSD, we introduce a Reward Feedback Learning method, as Fig. 3 ###reference_### illustrates. This method leverages a pre-trained reward model [49 ###reference_b49###] to assign reward scores to samples collected from LoRA model. Then LoRA model subsequently updates from these reweighted samples,\nwhere denotes the generated image of model with parameters corresponding to prompt , and represents the pretrained reward model [49 ###reference_b49###], represents reward-to-loss map function implemented by ReLU, and . We used the DDIM [32 ###reference_b32###] to rapidly sample samples during the early iteration stage.\nThis method saves 2 times the iteration step for VPSD convergence and improves the aesthetic score of the SVG by filtering out samples with low reward values in LoRA.\nOur final VPSD objective is then defined by the weighted average of the three terms,\nwhere indicates reward feedback strength.\n###figure_4###"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.3",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "Vector Representation Primitives",
|
| 69 |
+
"text": "In addition to text prompts, SVGDreamer provides a variety of vector representations for style control.\nThese vector representations are achieved by limiting primitive types and their parameters.\nUsers can control the art style generated by SVGDreamer by modifying the input text or by constraining the set of primitives and parameters.\nWe explore six settings:\n1) Iconography is the most common SVG style, consists of several paths and their fill colors. This style allows for a wide range of compositions while maintaining a minimalistic expression. We utilize closed form B\u00e9zier curves with trainable control points and fill colors.\n2) Sketch is a way to convey information with minimal expression. We use open form B\u00e9zier curves with trainable control points and opacity.\n3) Pixel Art is a popular video-game inspired style, frequently used for character and background art. We use square SVG polygons with fill colors.\n4) Low-Poly is to consciously cut and pile up a certain number of simple geometric shapes according to the modeling laws of objects. We use square SVG polygons with trainable control points and fill colors.\n5) Painting is a means of approximating the painter\u2019s painting style in vector graphics. We use open form B\u00e9zier curves with trainable control points, stroke colors and stroke widths.\n6) Ink and Wash Painting is a traditional Chinese art form that utilizes varying concentrations of black ink. We use open form B\u00e9zier curves with trainable control points, opacity, and stroke widths."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Experiments",
|
| 75 |
+
"text": ""
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.1",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Qualitative Evaluation",
|
| 81 |
+
"text": "###figure_5### Figure 4 ###reference_### presents a qualitative comparison between SVGDreamer and existing text-to-SVG methods.\nCompared to CLIPDraw [4 ###reference_b4###], SVGDreamer synthesizes SVGs with higher fidelity and detail.\nWe also compare our work with SDS-based methods [12 ###reference_b12###, 48 ###reference_b48###], emphasizing our ability to address issues such as shape over-smoothing and color over-saturation.\nAs shown in the fifth column, SIVE achieves semantic decoupling but cannot overcome the inherently smooth nature of SDS.\nAs observed in the last two columns, our approach demonstrates superior detail compared to the SDS-based approach, regardless of whether the model was optimized from scratch or through the entire process. Consequently, this leads to a higher aesthetic score."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.2",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "Quantitative Evaluation",
|
| 87 |
+
"text": "To demonstrate the effectiveness of our proposed method, we conducted comprehensive experiments to evaluate the model across various aspects, including Fr\u00e9chet Inception Distance (FID) [6 ###reference_b6###], Peak Signal-to-Noise Ratio (PSNR) [9 ###reference_b9###], CLIPScore [23 ###reference_b23###], BLIPScore [13 ###reference_b13###], Aesthetic score [29 ###reference_b29###] and Human Performance Score [47 ###reference_b47###] (HPS).\nTable 1 ###reference_### presents a comparison of our approach with the most representative text-to-SVG methods, including CLIPDraw [4 ###reference_b4###], VectorFusion [12 ###reference_b12###], and DiffSketcher [48 ###reference_b48###].\nWe conducted a quantitative evaluation of the six styles identified in Sec. 3.3 ###reference_###, with each style comprising 10 unique prompts and 50 synthesized SVGs per prompt.\nFor diversity evaluation of vector graphics and fill color saturation, we used SD sampling results as a Ground Truth (GT) and calculated FID and PSNR metrics respectively.\nThe quantitative analysis in the first two columns indicates that our method surpasses other methods in terms of FID and PSNR. This suggests that our method offers a greater range of diversity compared to SDS-based synthesis [12 ###reference_b12###, 48 ###reference_b48###].\nTo assess the consistency between the generated SVGs and the provided text prompts, we used both CLIPScore and BLIPScore.\nTo measure the perceptual quality of synthetic vector images, we measure aesthetic scores using the LAION aesthetic classifier [29 ###reference_b29###]. Besides, we use HPS to evaluate our approach from a human aesthetic perspective."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.3",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "Ablation Study",
|
| 93 |
+
"text": ""
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.3.1",
|
| 97 |
+
"parent_section_id": "4.3",
|
| 98 |
+
"section_name": "4.3.1 SIVE v.s. LIVE [17]",
|
| 99 |
+
"text": "###figure_6### LIVE [17 ###reference_b17###] offers a comprehensive image vectorization process that optimizes the vector graph in a hierarchical, layer-wise fashion.\nHowever, as Fig. 6 ###reference_### illustrates, LIVE struggles to accurately capture and distinguish between various subjects within an image, which can result in the same paths being superimposed on different visual subjects.\nWhen tasked with representing complex vector graphics requiring a greater number of paths, LIVE tends to superimpose path hierarchies across different objects, complicating the SVG representation and making it difficult to edit.\nThe resulting SVGs often contain complex and redundant shapes that can be inconvenient for further editing.\nIn contrast, SIVE is capable of generating succinct SVG forms with semantic-driven structures that align more closely with human perception. SIVE efficiently assigns paths to objects, enabling object-level vectorization."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.3.2",
|
| 103 |
+
"parent_section_id": "4.3",
|
| 104 |
+
"section_name": "4.3.2 VPSD v.s. LSDS [12, 11] v.s. ASDS [48]",
|
| 105 |
+
"text": "The development of text-to-SVG [12 ###reference_b12###, 48 ###reference_b48###] was inspired by DreamFusion [22 ###reference_b22###], but the resulting vector graphics have limited quality and exhibit a similar over-smoothness as the DreamFusion reconstructed 3D models.\nThe main distinction between ASDS and LSDS lies in the augmentation of the input data.\nAs demonstrated in Table 1 ###reference_### and Fig. 4 ###reference_###, our approach demonstrates superior performance compared to the SDS-based approach in terms of FID. This indicates that our method is able to maintain a higher level of diversity without being affected by mode-seeking disruptions. Additionally, our approach achieves a higher PSNR compared to the SDS-based approach, suggesting that our method avoids the issue of supersaturation caused by averaging colors."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.4",
|
| 109 |
+
"parent_section_id": "4",
|
| 110 |
+
"section_name": "Applications of SVGDreamer",
|
| 111 |
+
"text": "Our proposed tool, SVGDreamer, is capable of generating vector graphics with exceptional editability. Therefore, it can be utilized to create vector graphic assets for poster and logo design.\nAs shown in Fig. 5 ###reference_###, all graphic elements in the two poster examples are generated by our SVGDreamer.\nDesigners can easily recombine these elements with glyph to create unique posters. Additional examples of posters and logo designs can be found in Supplementary."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "5",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "Conclusion",
|
| 117 |
+
"text": "In this work, we have introduced SVGDreamer, an innovative model for text-guided vector graphics synthesis. SVGDreamer incorporates two crucial technical designs: Semantic-Driven Image Vectorization (SIVE) and Vectorized Particle-Based Score Distillation (VPSD). These empower our model to generate vector graphics with high editability, superior visual quality, and notable diversity. SVGDreamer is expected to significantly advance the application of text-to-SVG models in the design field.\nLimitations.\nThe editability of our method, which depends on the text-to-image (T2I) model used, is currently limited. However, future advancements in T2I diffusion models could enhance the decomposition capabilities of our approach, thereby extending its editability. Moreover, exploring ways to automatically determine the number of control points at the SIVE object level is valuable.\nAcknowledgement.\u2003This work is supported by the CCF-Baidu Open Fund Project and Young Elite Scientists Sponsorship Program by CAST."
|
| 118 |
+
}
|
| 119 |
+
],
|
| 120 |
+
"appendix": [
|
| 121 |
+
{
|
| 122 |
+
"section_id": "Appendix x1",
|
| 123 |
+
"parent_section_id": null,
|
| 124 |
+
"section_name": "Overview",
|
| 125 |
+
"text": "###figure_7### This supplementary material is organized into several sections that provide additional details and analysis related to our work on SVGDreamer. Specifically, it will cover the following aspects:\nIn section A ###reference_###, we present additional qualitative results of SVGDreamer, demonstrating its ability to generate SVGs with high editability, visual quality, and diversity.\nIn section B ###reference_###, we demonstrate the potential applications of SVGDreamer in poster design and icon design.\nIn section C ###reference_###, we provide more implementation details of SVGDreamer.\nIn section D ###reference_###, We explain how to identify semantic objects in SIVE prompts.\nIn section E ###reference_###, we conduct additional ablation studies to demonstrate the effects of CFG weights (see Sec. E.1 ###reference_###), ReFL (see Sec. E.2 ###reference_###), the number of vector particles (see Sec. E.3 ###reference_###), and the number of paths (see Sec. E.4 ###reference_###).\nIn section F ###reference_###, we provide example results from using VPSD for raster image synthesis.\nIn section G ###reference_###, we show the pseudo code of SVGDreamer. Code is available now 111https://github.com/ximinng/SVGDreamer ###reference_###."
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"section_id": "Appendix 1",
|
| 129 |
+
"parent_section_id": null,
|
| 130 |
+
"section_name": "Appendix A Additional Qualitative Results",
|
| 131 |
+
"text": "Editability.\u2003Our tool, SVGDreamer, is designed to generate high-quality vector graphics with versatile editable properties, empowering users to efficiently reuse synthesized vector elements and create new vector graphics.\nIn our manuscript, Fig. 5 ###reference_### showcases two posters where each character is generated using SVGDreamer.\nAdditionally, we present further examples in Fig. 7 ###reference_###. These generated SVGs can be decomposed into background and foreground elements, which can then be recombined to create new SVGs.\nVisual Quality and Diversity.\u2003In Fig. 8 ###reference_###, we present additional examples generated by SVGDreamer, showcasing its ability to synthesize diverse object-level and scene-level vector graphics based on text prompts. Notably, our model can generate vector graphics with different styles, such as oil painting, watercolor, and sketch, by manipulating the type of primitives and text prompts. By incorporating the VPSD and ReFL into our model, SVGDreamer produces richer details compared to the state-of-the-art method VectorFusion.\nIt is important to highlight that our model can achieve different styles without relying on additional reference style images. Existing approaches for generating stylized vector graphics, such as StyleClipDraw, typically follow a style transfer pipeline used for raster images, which requires an additional style image as a reference. In contrast, SVGDreamer, being built upon a T2I model, can simply inject style information through text prompts. For instance, in the second example, we can obtain an oil painting in Van Gogh\u2019s style by using a text prompt."
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"section_id": "Appendix 2",
|
| 135 |
+
"parent_section_id": null,
|
| 136 |
+
"section_name": "Appendix B Applications of SVGDreamer",
|
| 137 |
+
"text": "In this section, we will demonstrate the utilization of SVGDreamer for synthesizing vector posters and icons.\nPoster Design.\u2003A poster is a large sheet used for advertising events, films, or conveying messages to people. It usually contains text and graphic elements. While existing T2I models have been developing rapidly, they still face challenges in text generation and control. On the other hand, SVG offers greater ease in text control. In Fig. 9 ###reference_###, we compare the posters generated by our SVGDreamer with those produced by four T2I models. It is important to note that all results generated by these T2I models are in raster format.\nWe will start by explaining the usage of our SVGDreamer tool for poster design. Initially, we employ SVGDreamer to generate graphic content. Then, we utilize modern font libraries to create vector fonts, taking advantage of SVG\u2019s transform properties to precisely control the font layout. Ultimately, we combine the vector images and fonts to produce comprehensive vector posters.\nTo be more specific, we employ the FreeType font library 222http://freetype.org/index.html ###reference_reetype.org/index.html### to represent glyphs using vectorized graphic outlines. In simpler terms, these glyph\u2019s outlines are composed of lines, B\u00e9zier curves, or B-Spline curves. This approach allows us to adjust and render the letters at any size, similar to other vector illustrations.\nThe joint optimization of text and graphic content for enhanced visual quality is left for future work.\nAs depicted in Fig. 9 ###reference_###, both Stable Diffusion [26 ###reference_b26###] (the first column) and DeepFloyd IF [37 ###reference_b37###] (the second column) display various text rendering errors, including missing glyphs, repeated or merged glyphs, and misshapen glyphs.\nGlyphControl [50 ###reference_b50###] (the third column) occasionally omits individual letters, and the fonts obscure content, resulting in areas where the fonts appear to lack content objects.\nTextDiffuser [2 ###reference_b2###] (the fifth column) is capable of generating fonts for different layouts, but it also suffers from the artifact of layout control masks, which disrupts the overall harmony of the content.\nIn contrast, posters created using our SVGDreamer are not restricted by resolution size, ensuring the text remains clear and legible. Moreover, our approach offers the convenience of easily editing both fonts and layout, providing a more flexible poster design approach.\n###figure_8### ###figure_9### Icon Design.\u2003In addition to posters, SVGDreamer can be applied in icon design (as shown in the Fig. 10 ###reference_###).\n###figure_10### We use SVGDreamer to obtain the graphic contents, and then create the polygon and circle layout by defining def tags in the SVG file. Then, we append the vector text paths to the end of the SVG file in order to obtain a complete vector icon."
|
| 138 |
+
},
|
| 139 |
+
{
|
| 140 |
+
"section_id": "Appendix 3",
|
| 141 |
+
"parent_section_id": null,
|
| 142 |
+
"section_name": "Appendix C Implementation Details",
|
| 143 |
+
"text": "Our method is based on the pre-trained Stable Diffusion model [26 ###reference_b26###]. We use the Adam optimizer with , , for optimizing SVG path parameters .\nWe use a learning rate warm-up strategy. In the first 50 iterations, we gradually increase the control point learning rate from 0.01 to 0.9, and then employ exponential decay from 0.8 to 0.4 in the remaining 650 iterations (a total of 700 iterations).\nFor the color learning rate, we set it to 0.1 and the stroke width learning rate to 0.01.\nWe adopt AdamW optimizer with , , , for the training of LoRA [10 ###reference_b10###] parameters.\nIn the majority of our experiments, we set the particle number to 6, which means that 6 particles participate in the VPSD (Sec. 3.2 ###reference_###), LoRA update, and ReFL update simultaneously.\nTo ensure diversity and fidelity to text prompts in the synthesized SVGs, while maintaining rich details, we set the guidance scale of the Classifier-free Guidance (CFG [7 ###reference_b7###]) to 7.5.\nDuring the optimization process, SVGDreamer requires at least 31 GB memory on an Nvidia-V100 GPU to produce 6 SVGs.\nSynthesizing flat iconographic vectors, we allow path control points and fill colors to be optimized. During the course of optimization, many paths learn low opacity or shrink to a small area and are unused.\nTo encourage usage of paths and therefore more diverse and detailed images, motivated by VectorFusion [12 ###reference_b12###], we periodically reinitialize paths with fill-color opacity or area below a threshold. Reinitialized paths are removed from optimization and the SVG, and recreated as a randomly located and colored circle on top of existing paths."
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"section_id": "Appendix 4",
|
| 147 |
+
"parent_section_id": null,
|
| 148 |
+
"section_name": "Appendix D Object Identification in SIVE Prompts",
|
| 149 |
+
"text": "###figure_11### It is common for multiple nouns within a sentence to refer to the same object. We present two examples in Fig. 11 ###reference_###.\nIn our experiments, we did not employ a specific selection strategy because the cross-attention maps for such nouns-for example, \u201cman\u201d and \u201castronaut\u201d \u2013 are very similar. Therefore, choosing either \u201cman\u201d or \u201castronaut\u201d produces similar results with our method. For more precise control, users may utilize the cross-attention maps of the text prompt to identify the desired objects.\nIn SIVE, users can use visual text prompts to identify semantic objects."
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"section_id": "Appendix 5",
|
| 153 |
+
"parent_section_id": null,
|
| 154 |
+
"section_name": "Appendix E Additional Ablation Studies",
|
| 155 |
+
"text": "Next, we provide additional ablation experiments to demonstrate the effectiveness of the proposed components.\n###figure_12### In this section, we explore how Classifier-free Guidances (CFG) [7 ###reference_b7###] affects the diversity of generated results.\nFor VPSD, we set the number of particles as 6 and run experiments with different CFG values.\nFor LSDS [12 ###reference_b12###], we run 4 times of generation with different random seeds.\nThe results are shown in Fig. 12 ###reference_###. As shown in the figure, smaller CFG provides more diversity.\nWe conjecture that this is because the distribution of smaller guidance weights has more diverse modes. However, when the CFG becomes too small (e.g., CFG= 2), it cannot provide enough guidance to generate reasonable results.\nTherefore, in our implementation, we set CFG to 7.5 as a trade-off between diversity and optimization stability.\nNote that SDS-based methods [12 ###reference_b12###, 48 ###reference_b48###] do not work well in such small CFG weights.\nInstead, our VPSD provides a trade-off option between CFG weight and diversity, and it can generate more diverse results by simply setting a smaller CFG.\n###figure_13### In [45 ###reference_b45###], only selected particles update the LoRA network in each iteration. However, this approach neglects the learning progression of LoRA networks, which are used to represent variational distributions. These networks typically require numerous iterations to approximate the optimal distribution, resulting in slow convergence. Unfortunately, the randomness introduced by particle initialization can lead to early learning of sub-optimal particles, which adversely affects the final convergence result.\nIn VPSD, we introduce a Reward Feedback Learning (ReFL) method. This method leverages a pre-trained reward model [49 ###reference_b49###] to assign reward scores to samples collected from LoRA model. Then LoRA model subsequently updates from these reweighted samples.\nAs indicated in Table 2 ###reference_###, this led to a significant reduction in the number of iterations by almost 50%, resulting in a 50% decrease in optimization time.\nAnd improves the aesthetic score of the SVG by filtering out samples with low reward values in LoRA.\nFiltering out samples with low reward values, as demonstrated in Table 1 ###reference_###, enhances the aesthetic score of the SVG.\nThe visual improvements brought by ReFL are illustrated in Fig. 13 ###reference_###.\n###figure_14### We investigate the impact of the number of particles on the generated results. We vary the number of particles in 1, 4, 8, 16 and analyze how this variation affects the outcomes.\nThe CFG of VPSD is set as 7.5.\nAs shown in Fig. 14 ###reference_###, the diversity of the generated results is slightly larger as the number of particles increases. Meanwhile, the quality of generated results is not significantly affected by the number of particles.\nConsidering the high computation overhead associated with optimizing vector primitive representations and the limitations imposed by available computation resources, we limit our testing to a maximum of 6 particles.\nThis subsection analyzes the effect of different stroke numbers on VPSD synthetic vector images. Figure 15 ###reference_### shows examples with 128, 256, 512, and 768 paths, from top to bottom, using Iconography primitives. As the path count increases, the image transitions from abstract to more concrete, and the level of detail notably improves. 
VPSD offers superior visual details compared to SDS, including aspects like water reflections. Additionally, VPSD better aligns with text prompts.\n###figure_15### ###figure_16###"
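The reward-filtering step described above can be summarized with a small sketch: samples drawn from the LoRA-augmented model are scored by a reward model, low-reward samples are discarded, and the survivors are reweighted before the LoRA update. The reward_fn and loss_fn below are stand-ins, not the actual ImageReward or SVGDreamer interfaces, and the keep ratio is an assumed value.

```python
# Sketch of the reward-filtering idea behind ReFL: score samples with a reward
# model, keep only the highest-reward ones, and weight their losses when forming
# the LoRA update. All callables are placeholders for illustration.
import torch

def refl_weighted_loss(samples: torch.Tensor, reward_fn, loss_fn,
                       keep_ratio: float = 0.5) -> torch.Tensor:
    rewards = reward_fn(samples)                    # [B] scalar reward per sample
    k = max(1, int(keep_ratio * samples.shape[0]))
    top_r, top_idx = torch.topk(rewards, k)         # keep the highest-reward samples
    weights = torch.softmax(top_r, dim=0)           # reweight survivors by reward
    losses = loss_fn(samples[top_idx])              # [k] per-sample LoRA losses
    return (weights * losses).sum()

if __name__ == "__main__":
    fake_reward = lambda x: x.flatten(1).mean(dim=1)      # stand-in reward model
    fake_loss = lambda x: x.flatten(1).pow(2).mean(dim=1)  # stand-in LoRA loss
    batch = torch.randn(8, 3, 32, 32)
    print(refl_weighted_loss(batch, fake_reward, fake_loss).item())
```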
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"section_id": "Appendix 6",
|
| 159 |
+
"parent_section_id": null,
|
| 160 |
+
"section_name": "Appendix F VPSD for 2D Image Synthesis",
|
| 161 |
+
"text": "In this work, VPSD is specifically designed for text-to-SVG generation; however, it can also be adapted for 2D image synthesis.\nAs illustrated in Fig. 16 ###reference_###, images synthesized by VSD may exhibit displaced or incomplete object layouts, resulting in samples that might not meet human aesthetic preferences. In contrast, VPSD integrates a reward score within its feedback learning process, which significantly enhances the quality of the generated images."
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"section_id": "Appendix 7",
|
| 165 |
+
"parent_section_id": null,
|
| 166 |
+
"section_name": "Appendix G Algorithm for VPSD",
|
| 167 |
+
"text": "We summarize the algorithm of Vectorized Particle-based Score Distillation (VPSD) in Algorithm 1 ###reference_###.\nFirst, VPSD initializes groups of SVG parameters, a pretrained diffusion model parameterized by and the LoRA layers parameterized by , as the pretrained reward model .\nNote that only the diffusion model is pretrained with frozen parameters, while LoRA [10 ###reference_b10###] thaws some of its parameters.\nSubsequently, VPSD randomly selects a parameter from the set of SVG parameters and generates a raster image based on this selection. The parameter is then updated using Variational Score Distillation (VSD). samples are sampled using and utilized to update the parameters of .\nThis process is repeated until a satisfactory result is obtained and the algorithm returns groups of SVG parameters as the final output.\nAlgorithm 2 ###reference_### is the combination of VPSD and SIVE (Semantic-driven Image Vectorizatio). This algorithm has the same initialization as VPSD, but it needs to get a sample using diffusion model given text prompt . In the sampling process, it can obtain the sample\u2019s corresponding attention map. Depending on attention map, the algorithm can get background mask and foreground mask. It optimizes the SVG parameters according to the foreground mask and background mask, respectively, and then fine-tunes them using the VPSD algorithm."
|
| 168 |
+
}
|
| 169 |
+
],
|
| 170 |
+
"tables": {
|
| 171 |
+
"1": {
|
| 172 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.8.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.9.2\" style=\"font-size:90%;\">Quantitative evaluation of various Text-to-SVG methods.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.6\" style=\"width:433.6pt;height:43.7pt;vertical-align:-0.3pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-503.1pt,50.3pt) scale(0.301164300065778,0.301164300065778) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.6.6\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T1.6.6.6.7\">Method / Metric</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.1.1.1.1\">FID\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib6\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">6</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.2.2.2.2\">PSNR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib9\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">9</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.3.3.3\">CLIPScore\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib23\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">23</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.4.4.4\">BLIPScore\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib13\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">13</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.5.5.5.5\">Aesthetic\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib29\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">29</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.6.6\">HPS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib47\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">47</span></a>]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.7.1\">CLIPDraw\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib4\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">4</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.7.2\">160.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.7.3\">8.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.7.4\">0.2486</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.7.5\">0.3933</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.7.6\">3.9803</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.7.7\">0.2347</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.6.6.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.6.6.8.1\">VectorFusion (scratch)\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib12\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">12</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.8.2\">119.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.8.3\">6.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.8.4\">0.2298</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.8.5\">0.3803</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.8.6\">4.5165</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.8.7\">0.2334</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.6.6.9.1\">VectorFusion\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib12\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">12</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.2\">100.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.3\">8.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.4\">0.2720</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.5\">0.4291</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.6\">4.9845</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.7\">0.2450</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.6.6.10.1\">DiffSketcher(RGB)\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.16476v6#bib.bib48\" title=\"\"><span class=\"ltx_text\" style=\"font-size:90%;\">48</span></a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2\">118.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.3\">6.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.4\">0.2402</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.5\">0.4185</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.6\">4.1562</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.7\">0.2423</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.11.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.11.1.1\">SVGDreamer</span> (from scratch)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.11.2\">84.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.11.3\">10.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.11.4\">0.2951</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.11.5\">0.4311</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.11.6\">5.1822</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.11.7\">0.2484</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T1.6.6.12.1\">+Reward Feedback</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.12.2\">83.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.12.3\">10.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.12.4\">0.2988</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.12.5\">0.4335</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T1.6.6.12.6\">5.2825</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.12.7\">0.2559</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.13.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.1.1\">SVGDreamer</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.13.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.2.1\">59.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.13.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.3.1\">14.54</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.13.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.4.1\">0.3001</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.13.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.5.1\">0.4623</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.13.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.6.1\">5.5432</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.13.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.7.1\">0.2685</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 173 |
+
"capture": "Table 1: Quantitative evaluation of various Text-to-SVG methods."
|
| 174 |
+
},
|
| 175 |
+
"2": {
|
| 176 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:144%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A5.T2.4.1.1\" style=\"font-size:63%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"A5.T2.5.2\" style=\"font-size:63%;\">Efficiency of our proposed ReFL in SVGDreamer.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A5.T2.6\" style=\"width:433.6pt;height:80.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-26.8pt,4.9pt) scale(0.890169292960301,0.890169292960301) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A5.T2.6.1\">\n<tr class=\"ltx_tr\" id=\"A5.T2.6.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A5.T2.6.1.1.1\"><span class=\"ltx_text\" id=\"A5.T2.6.1.1.1.1\" style=\"font-size:144%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A5.T2.6.1.1.2\"><span class=\"ltx_text\" id=\"A5.T2.6.1.1.2.1\" style=\"font-size:144%;\">Canvas Size</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A5.T2.6.1.1.3\"><span class=\"ltx_text\" id=\"A5.T2.6.1.1.3.1\" style=\"font-size:144%;\">Path Number</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"A5.T2.6.1.1.4\"><span class=\"ltx_text\" id=\"A5.T2.6.1.1.4.1\" style=\"font-size:144%;\">Iteration Steps</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A5.T2.6.1.1.5\"><span class=\"ltx_text\" id=\"A5.T2.6.1.1.5.1\" style=\"font-size:144%;\">Time(min:sec)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T2.6.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.2.1\"><span class=\"ltx_text\" id=\"A5.T2.6.1.2.1.1\" style=\"font-size:144%;\">W/O ReFL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.2.2\"><span class=\"ltx_text\" id=\"A5.T2.6.1.2.2.1\" style=\"font-size:144%;\">224 * 224</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.2.3\"><span class=\"ltx_text\" id=\"A5.T2.6.1.2.3.1\" style=\"font-size:144%;\">128</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.2.4\"><span class=\"ltx_text\" id=\"A5.T2.6.1.2.4.1\" style=\"font-size:144%;\">500</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T2.6.1.2.5\"><span class=\"ltx_text\" id=\"A5.T2.6.1.2.5.1\" style=\"font-size:144%;\">13m15s</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T2.6.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.3.1\"><span class=\"ltx_text\" id=\"A5.T2.6.1.3.1.1\" style=\"font-size:144%;\">W ReFL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.3.2\"><span class=\"ltx_text\" id=\"A5.T2.6.1.3.2.1\" style=\"font-size:144%;\">224 * 224</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.3.3\"><span class=\"ltx_text\" id=\"A5.T2.6.1.3.3.1\" style=\"font-size:144%;\">128</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.3.4\"><span class=\"ltx_text\" id=\"A5.T2.6.1.3.4.1\" style=\"font-size:144%;\">300</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T2.6.1.3.5\"><span class=\"ltx_text\" id=\"A5.T2.6.1.3.5.1\" 
style=\"font-size:144%;\">6m45s</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T2.6.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.4.1\"><span class=\"ltx_text\" id=\"A5.T2.6.1.4.1.1\" style=\"font-size:144%;\">W/O ReFL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.4.2\"><span class=\"ltx_text\" id=\"A5.T2.6.1.4.2.1\" style=\"font-size:144%;\">600 * 600</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.4.3\"><span class=\"ltx_text\" id=\"A5.T2.6.1.4.3.1\" style=\"font-size:144%;\">256</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.4.4\"><span class=\"ltx_text\" id=\"A5.T2.6.1.4.4.1\" style=\"font-size:144%;\">500</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T2.6.1.4.5\"><span class=\"ltx_text\" id=\"A5.T2.6.1.4.5.1\" style=\"font-size:144%;\">14m21s</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T2.6.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.5.1\"><span class=\"ltx_text\" id=\"A5.T2.6.1.5.1.1\" style=\"font-size:144%;\">W ReFL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.5.2\"><span class=\"ltx_text\" id=\"A5.T2.6.1.5.2.1\" style=\"font-size:144%;\">600 * 600</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.5.3\"><span class=\"ltx_text\" id=\"A5.T2.6.1.5.3.1\" style=\"font-size:144%;\">256</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"A5.T2.6.1.5.4\"><span class=\"ltx_text\" id=\"A5.T2.6.1.5.4.1\" style=\"font-size:144%;\">300</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A5.T2.6.1.5.5\"><span class=\"ltx_text\" id=\"A5.T2.6.1.5.5.1\" style=\"font-size:144%;\">7m21s</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 177 |
+
"capture": "Table 2: Efficiency of our proposed ReFL in SVGDreamer."
|
| 178 |
+
}
|
| 179 |
+
},
|
| 180 |
+
"image_paths": {
|
| 181 |
+
"1": {
|
| 182 |
+
"figure_path": "2312.16476v6_figure_1.png",
|
| 183 |
+
"caption": "Figure 1: \nGiven a text prompt, SVGDreamer can generate a variety of vector graphics. SVGDreamer is a versatile tool that can work with various vector styles without being limited to a specific prompt suffix. We utilize various colored suffixes to indicate different styles. The style is governed by vector primitives.",
|
| 184 |
+
"url": "http://arxiv.org/html/2312.16476v6/x1.png"
|
| 185 |
+
},
|
| 186 |
+
"2": {
|
| 187 |
+
"figure_path": "2312.16476v6_figure_2.png",
|
| 188 |
+
"caption": "Figure 2: \nOverview of SVGDreamer. The method consists of two parts: semantic-driven image vectorization (SIVE, Sec. 3.1) and SVG synthesis through VPSD optimization (Sec. 3.2). The result obtained from SIVE can be used as input of VPSD for further refinement.",
|
| 189 |
+
"url": "http://arxiv.org/html/2312.16476v6/extracted/6073342/img/pipe.png"
|
| 190 |
+
},
|
| 191 |
+
"3": {
|
| 192 |
+
"figure_path": "2312.16476v6_figure_3.png",
|
| 193 |
+
"caption": "Figure 3: \nThe process of Vectorized Particle-based Score Distillation.\nVPSD allows k\ud835\udc58kitalic_k SVGs as input and simultaneously optimizes k\ud835\udc58kitalic_k sets of SVG parameters.",
|
| 194 |
+
"url": "http://arxiv.org/html/2312.16476v6/x2.png"
|
| 195 |
+
},
|
| 196 |
+
"4": {
|
| 197 |
+
"figure_path": "2312.16476v6_figure_4.png",
|
| 198 |
+
"caption": "Figure 4: \nQualitative comparison of different methods.\nNote that DiffSketcher was originally designed for vector sketch generation; therefore, we re-implemented it to generate RGB vector graphics.",
|
| 199 |
+
"url": "http://arxiv.org/html/2312.16476v6/x3.png"
|
| 200 |
+
},
|
| 201 |
+
"5": {
|
| 202 |
+
"figure_path": "2312.16476v6_figure_5.png",
|
| 203 |
+
"caption": "Figure 5: \nExamples of vector assets created by SVGDreamer.\nWe specify foreground content as an SVG asset through a text prompt.\nTo create assets that fit the SVG style, such as flat polygon vector, we constrain the vector representation via using a different prompt modifier to encourage the appropriate style: * \u2026 on a white background, full body action pose, complete body, concept art, flat 2d vector icon.",
|
| 204 |
+
"url": "http://arxiv.org/html/2312.16476v6/x4.png"
|
| 205 |
+
},
|
| 206 |
+
"6": {
|
| 207 |
+
"figure_path": "2312.16476v6_figure_6.png",
|
| 208 |
+
"caption": "Figure 6: \nComparison of LIVE vectorization with SIVE.\nIn the first row, \u201cForeground 1\u201d and \u201cForeground 2\u201d refer to Astronaut and Plants, respectively. Glyphs have been added manually and were not produced by our method. In the LIVE setup, we follow the protocol outlined in VectorFusion [12], which represents a vector image with 128 paths distributed across four layers, with 32 paths in each layer.",
|
| 209 |
+
"url": "http://arxiv.org/html/2312.16476v6/x5.png"
|
| 210 |
+
},
|
| 211 |
+
"7": {
|
| 212 |
+
"figure_path": "2312.16476v6_figure_7.png",
|
| 213 |
+
"caption": "Figure 7: \nExamples showcasing the editability of the results generated by our SVGDreamer.",
|
| 214 |
+
"url": "http://arxiv.org/html/2312.16476v6/x6.png"
|
| 215 |
+
},
|
| 216 |
+
"8": {
|
| 217 |
+
"figure_path": "2312.16476v6_figure_8.png",
|
| 218 |
+
"caption": "Figure 8: \nMore results generated by our SVGDreamer.\nThe style is governed by vector primitives.",
|
| 219 |
+
"url": "http://arxiv.org/html/2312.16476v6/x7.png"
|
| 220 |
+
},
|
| 221 |
+
"9": {
|
| 222 |
+
"figure_path": "2312.16476v6_figure_9.png",
|
| 223 |
+
"caption": "Figure 9: \nComparison of synthetic posters generated by different methods.\nThe input text prompts and glyphs to be added to the posters are displayed on the left side.",
|
| 224 |
+
"url": "http://arxiv.org/html/2312.16476v6/x8.png"
|
| 225 |
+
},
|
| 226 |
+
"10": {
|
| 227 |
+
"figure_path": "2312.16476v6_figure_10.png",
|
| 228 |
+
"caption": "Figure 10: \nExamples of synthetic icons.\nNote that the glyphs are manually added.",
|
| 229 |
+
"url": "http://arxiv.org/html/2312.16476v6/x9.png"
|
| 230 |
+
},
|
| 231 |
+
"11": {
|
| 232 |
+
"figure_path": "2312.16476v6_figure_11.png",
|
| 233 |
+
"caption": "Figure 11: \nVisualizations of the LDM cross-attention maps.",
|
| 234 |
+
"url": "http://arxiv.org/html/2312.16476v6/x10.png"
|
| 235 |
+
},
|
| 236 |
+
"12": {
|
| 237 |
+
"figure_path": "2312.16476v6_figure_12.png",
|
| 238 |
+
"caption": "Figure 12: \nAblation on how Classifier-free Guidances (CFG) [7] weight affects the randomness.\nSmaller CFG provides more diversity. But too small CFG provides less optimization stability. The prompt is \u201cA photograph of an astronaut riding a horse\u201d.",
|
| 239 |
+
"url": "http://arxiv.org/html/2312.16476v6/x11.png"
|
| 240 |
+
},
|
| 241 |
+
"13": {
|
| 242 |
+
"figure_path": "2312.16476v6_figure_13.png",
|
| 243 |
+
"caption": "Figure 13: \nEffect of the Reward Feedback Learning (ReFL).\nWhen employing ReFL, the visual quality of the generated results is significantly enhanced.",
|
| 244 |
+
"url": "http://arxiv.org/html/2312.16476v6/x12.png"
|
| 245 |
+
},
|
| 246 |
+
"14": {
|
| 247 |
+
"figure_path": "2312.16476v6_figure_14.png",
|
| 248 |
+
"caption": "Figure 14: \nAblation on the number of particles.\nThe diversity of the generated results is slightly larger as the number of particles increases. The quality of generated results is not significantly affected by the number of particles. The prompt is \u201cA photograph of an astronaut riding a horse\u201d.",
|
| 249 |
+
"url": "http://arxiv.org/html/2312.16476v6/x13.png"
|
| 250 |
+
},
|
| 251 |
+
"15": {
|
| 252 |
+
"figure_path": "2312.16476v6_figure_15.png",
|
| 253 |
+
"caption": "Figure 15: \nEffect of the number of paths.\nAdding vector paths can be synthesized to enhance SVG detail.",
|
| 254 |
+
"url": "http://arxiv.org/html/2312.16476v6/x14.png"
|
| 255 |
+
},
|
| 256 |
+
"16": {
|
| 257 |
+
"figure_path": "2312.16476v6_figure_16.png",
|
| 258 |
+
"caption": "Figure 16: \n2D image synthesis.\nComparison of the results from using VPSD and VSD for 2D image synthesis.",
|
| 259 |
+
"url": "http://arxiv.org/html/2312.16476v6/x15.png"
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
"validation": true,
|
| 263 |
+
"references": [
|
| 264 |
+
{
|
| 265 |
+
"1": {
|
| 266 |
+
"title": "Deepsvg: A hierarchical generative network for vector graphics animation.",
|
| 267 |
+
"author": "Alexandre Carlier, Martin Danelljan, Alexandre Alahi, and Radu Timofte.",
|
| 268 |
+
"venue": "Advances in Neural Information Processing Systems (NIPS), 33:16351\u201316361, 2020.",
|
| 269 |
+
"url": null
|
| 270 |
+
}
|
| 271 |
+
},
|
| 272 |
+
{
|
| 273 |
+
"2": {
|
| 274 |
+
"title": "Textdiffuser: Diffusion models as text painters.",
|
| 275 |
+
"author": "Jingye Chen, Yupan Huang, Tengchao Lv, Lei Cui, Qifeng Chen, and Furu Wei.",
|
| 276 |
+
"venue": "arXiv preprint arXiv:2305.10855, 2023.",
|
| 277 |
+
"url": null
|
| 278 |
+
}
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"3": {
|
| 282 |
+
"title": "Taming transformers for high-resolution image synthesis.",
|
| 283 |
+
"author": "Patrick Esser, Robin Rombach, and Bjorn Ommer.",
|
| 284 |
+
"venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (NIPS), pages 12873\u201312883, 2021.",
|
| 285 |
+
"url": null
|
| 286 |
+
}
|
| 287 |
+
},
|
| 288 |
+
{
|
| 289 |
+
"4": {
|
| 290 |
+
"title": "CLIPDraw: Exploring text-to-drawing synthesis through language-image encoders.",
|
| 291 |
+
"author": "Kevin Frans, Lisa Soros, and Olaf Witkowski.",
|
| 292 |
+
"venue": "In Advances in Neural Information Processing Systems (NIPS), 2022.",
|
| 293 |
+
"url": null
|
| 294 |
+
}
|
| 295 |
+
},
|
| 296 |
+
{
|
| 297 |
+
"5": {
|
| 298 |
+
"title": "A neural representation of sketch drawings.",
|
| 299 |
+
"author": "David Ha and Douglas Eck.",
|
| 300 |
+
"venue": "In International Conference on Learning Representations (ICLR), 2018.",
|
| 301 |
+
"url": null
|
| 302 |
+
}
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"6": {
|
| 306 |
+
"title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium.",
|
| 307 |
+
"author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.",
|
| 308 |
+
"venue": "Advances in neural information processing systems (NIPS), 30, 2017.",
|
| 309 |
+
"url": null
|
| 310 |
+
}
|
| 311 |
+
},
|
| 312 |
+
{
|
| 313 |
+
"7": {
|
| 314 |
+
"title": "Classifier-free diffusion guidance.",
|
| 315 |
+
"author": "Jonathan Ho and Tim Salimans.",
|
| 316 |
+
"venue": "arXiv preprint arXiv:2207.12598, 2022.",
|
| 317 |
+
"url": null
|
| 318 |
+
}
|
| 319 |
+
},
|
| 320 |
+
{
|
| 321 |
+
"8": {
|
| 322 |
+
"title": "Denoising diffusion probabilistic models.",
|
| 323 |
+
"author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.",
|
| 324 |
+
"venue": "In Advances in Neural Information Processing Systems (NIPS), pages 6840\u20136851, 2020.",
|
| 325 |
+
"url": null
|
| 326 |
+
}
|
| 327 |
+
},
|
| 328 |
+
{
|
| 329 |
+
"9": {
|
| 330 |
+
"title": "Image quality metrics: Psnr vs. ssim.",
|
| 331 |
+
"author": "Alain Hor\u00e9 and Djemel Ziou.",
|
| 332 |
+
"venue": "In 2010 20th International Conference on Pattern Recognition, pages 2366\u20132369, 2010.",
|
| 333 |
+
"url": null
|
| 334 |
+
}
|
| 335 |
+
},
|
| 336 |
+
{
|
| 337 |
+
"10": {
|
| 338 |
+
"title": "LoRA: Low-rank adaptation of large language models.",
|
| 339 |
+
"author": "Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.",
|
| 340 |
+
"venue": "In International Conference on Learning Representations (ICLR), 2022.",
|
| 341 |
+
"url": null
|
| 342 |
+
}
|
| 343 |
+
},
|
| 344 |
+
{
|
| 345 |
+
"11": {
|
| 346 |
+
"title": "Word-as-image for semantic typography.",
|
| 347 |
+
"author": "Shir Iluz, Yael Vinker, Amir Hertz, Daniel Berio, Daniel Cohen-Or, and Ariel Shamir.",
|
| 348 |
+
"venue": "ACM Transactions on Graphics (TOG), 42(4), 2023.",
|
| 349 |
+
"url": null
|
| 350 |
+
}
|
| 351 |
+
},
|
| 352 |
+
{
|
| 353 |
+
"12": {
|
| 354 |
+
"title": "Vectorfusion: Text-to-svg by abstracting pixel-based diffusion models.",
|
| 355 |
+
"author": "Ajay Jain, Amber Xie, and Pieter Abbeel.",
|
| 356 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.",
|
| 357 |
+
"url": null
|
| 358 |
+
}
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"13": {
|
| 362 |
+
"title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation.",
|
| 363 |
+
"author": "Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi.",
|
| 364 |
+
"venue": "In International Conference on Machine Learning (ICML), pages 12888\u201312900. PMLR, 2022.",
|
| 365 |
+
"url": null
|
| 366 |
+
}
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"14": {
|
| 370 |
+
"title": "Differentiable vector graphics rasterization for editing and learning.",
|
| 371 |
+
"author": "Tzu-Mao Li, Michal Luk\u00e1\u010d, Gharbi Micha\u00ebl, and Jonathan Ragan-Kelley.",
|
| 372 |
+
"venue": "ACM Transactions on Graphics (TOG), 39(6):193:1\u2013193:15, 2020.",
|
| 373 |
+
"url": null
|
| 374 |
+
}
|
| 375 |
+
},
|
| 376 |
+
{
|
| 377 |
+
"15": {
|
| 378 |
+
"title": "Magic3d: High-resolution text-to-3d content creation.",
|
| 379 |
+
"author": "Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin.",
|
| 380 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 300\u2013309, 2023.",
|
| 381 |
+
"url": null
|
| 382 |
+
}
|
| 383 |
+
},
|
| 384 |
+
{
|
| 385 |
+
"16": {
|
| 386 |
+
"title": "A learned representation for scalable vector graphics.",
|
| 387 |
+
"author": "Raphael Gontijo Lopes, David Ha, Douglas Eck, and Jonathon Shlens.",
|
| 388 |
+
"venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.",
|
| 389 |
+
"url": null
|
| 390 |
+
}
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"17": {
|
| 394 |
+
"title": "Towards layer-wise image vectorization.",
|
| 395 |
+
"author": "Xu Ma, Yuqian Zhou, Xingqian Xu, Bin Sun, Valerii Filev, Nikita Orlov, Yun Fu, and Humphrey Shi.",
|
| 396 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16314\u201316323, 2022.",
|
| 397 |
+
"url": null
|
| 398 |
+
}
|
| 399 |
+
},
|
| 400 |
+
{
|
| 401 |
+
"18": {
|
| 402 |
+
"title": "Nerf: Representing scenes as neural radiance fields for view synthesis.",
|
| 403 |
+
"author": "Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng.",
|
| 404 |
+
"venue": "Communications of the ACM, 65(1):99\u2013106, 2021.",
|
| 405 |
+
"url": null
|
| 406 |
+
}
|
| 407 |
+
},
|
| 408 |
+
{
|
| 409 |
+
"19": {
|
| 410 |
+
"title": "Clip-clop: Clip-guided collage and photomontage.",
|
| 411 |
+
"author": "Piotr Mirowski, Dylan Banarse, Mateusz Malinowski, Simon Osindero, and Chrisantha Fernando.",
|
| 412 |
+
"venue": "arXiv preprint arXiv:2205.03146, 2022.",
|
| 413 |
+
"url": null
|
| 414 |
+
}
|
| 415 |
+
},
|
| 416 |
+
{
|
| 417 |
+
"20": {
|
| 418 |
+
"title": "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models.",
|
| 419 |
+
"author": "Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen.",
|
| 420 |
+
"venue": "In Proceedings of the 39th International Conference on Machine Learning (ICML), pages 16784\u201316804, 2022.",
|
| 421 |
+
"url": null
|
| 422 |
+
}
|
| 423 |
+
},
|
| 424 |
+
{
|
| 425 |
+
"21": {
|
| 426 |
+
"title": "Do 2d {gan}s know 3d shape? unsupervised 3d shape reconstruction from 2d image {gan}s.",
|
| 427 |
+
"author": "Xingang Pan, Bo Dai, Ziwei Liu, Chen Change Loy, and Ping Luo.",
|
| 428 |
+
"venue": "In International Conference on Learning Representations (ICLR), 2021.",
|
| 429 |
+
"url": null
|
| 430 |
+
}
|
| 431 |
+
},
|
| 432 |
+
{
|
| 433 |
+
"22": {
|
| 434 |
+
"title": "Dreamfusion: Text-to-3d using 2d diffusion.",
|
| 435 |
+
"author": "Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall.",
|
| 436 |
+
"venue": "In The Eleventh International Conference on Learning Representations (ICLR), 2023.",
|
| 437 |
+
"url": null
|
| 438 |
+
}
|
| 439 |
+
},
|
| 440 |
+
{
|
| 441 |
+
"23": {
|
| 442 |
+
"title": "Learning transferable visual models from natural language supervision.",
|
| 443 |
+
"author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.",
|
| 444 |
+
"venue": "In International Conference on Machine Learning (ICML), pages 8748\u20138763. PMLR, 2021.",
|
| 445 |
+
"url": null
|
| 446 |
+
}
|
| 447 |
+
},
|
| 448 |
+
{
|
| 449 |
+
"24": {
|
| 450 |
+
"title": "Hierarchical text-conditional image generation with clip latents.",
|
| 451 |
+
"author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.",
|
| 452 |
+
"venue": "arXiv preprint arXiv:2204.06125, 2022.",
|
| 453 |
+
"url": null
|
| 454 |
+
}
|
| 455 |
+
},
|
| 456 |
+
{
|
| 457 |
+
"25": {
|
| 458 |
+
"title": "Im2vec: Synthesizing vector graphics without vector supervision.",
|
| 459 |
+
"author": "Pradyumna Reddy, Michael Gharbi, Michal Lukac, and Niloy J Mitra.",
|
| 460 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7342\u20137351, 2021.",
|
| 461 |
+
"url": null
|
| 462 |
+
}
|
| 463 |
+
},
|
| 464 |
+
{
|
| 465 |
+
"26": {
|
| 466 |
+
"title": "High-resolution image synthesis with latent diffusion models.",
|
| 467 |
+
"author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.",
|
| 468 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684\u201310695, 2022.",
|
| 469 |
+
"url": null
|
| 470 |
+
}
|
| 471 |
+
},
|
| 472 |
+
{
|
| 473 |
+
"27": {
|
| 474 |
+
"title": "Photorealistic text-to-image diffusion models with deep language understanding.",
|
| 475 |
+
"author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.",
|
| 476 |
+
"venue": "In Advances in Neural Information Processing Systems (NIPS), pages 36479\u201336494, 2022.",
|
| 477 |
+
"url": null
|
| 478 |
+
}
|
| 479 |
+
},
|
| 480 |
+
{
|
| 481 |
+
"28": {
|
| 482 |
+
"title": "Styleclipdraw: Coupling content and style in text-to-drawing synthesis.",
|
| 483 |
+
"author": "Peter Schaldenbrand, Zhixuan Liu, and Jean Oh.",
|
| 484 |
+
"venue": "arXiv preprint arXiv:2111.03133, 2022.",
|
| 485 |
+
"url": null
|
| 486 |
+
}
|
| 487 |
+
},
|
| 488 |
+
{
|
| 489 |
+
"29": {
|
| 490 |
+
"title": "Improved aesthetic predictor.",
|
| 491 |
+
"author": "Christoph Schuhmann.",
|
| 492 |
+
"venue": "https://github.com/christophschuhmann/improved-aesthetic-predictor, 2022.",
|
| 493 |
+
"url": null
|
| 494 |
+
}
|
| 495 |
+
},
|
| 496 |
+
{
|
| 497 |
+
"30": {
|
| 498 |
+
"title": "Clipgen: A deep generative model for clipart vectorization and synthesis.",
|
| 499 |
+
"author": "I-Chao Shen and Bing-Yu Chen.",
|
| 500 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, 28(12):4211\u20134224, 2022.",
|
| 501 |
+
"url": null
|
| 502 |
+
}
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"31": {
|
| 506 |
+
"title": "Deep unsupervised learning using nonequilibrium thermodynamics.",
|
| 507 |
+
"author": "Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli.",
|
| 508 |
+
"venue": "In Proceedings of the International Conference on Machine Learning (ICML), pages 2256\u20132265, 2015.",
|
| 509 |
+
"url": null
|
| 510 |
+
}
|
| 511 |
+
},
|
| 512 |
+
{
|
| 513 |
+
"32": {
|
| 514 |
+
"title": "Denoising diffusion implicit models.",
|
| 515 |
+
"author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.",
|
| 516 |
+
"venue": "In International Conference on Learning Representations (ICLR), 2021a.",
|
| 517 |
+
"url": null
|
| 518 |
+
}
|
| 519 |
+
},
|
| 520 |
+
{
|
| 521 |
+
"33": {
|
| 522 |
+
"title": "Generative modeling by estimating gradients of the data distribution.",
|
| 523 |
+
"author": "Yang Song and Stefano Ermon.",
|
| 524 |
+
"venue": "In Advances in Neural Information Processing Systems (NIPS), 2019.",
|
| 525 |
+
"url": null
|
| 526 |
+
}
|
| 527 |
+
},
|
| 528 |
+
{
|
| 529 |
+
"34": {
|
| 530 |
+
"title": "Clipfont: Text guided vector wordart generation.",
|
| 531 |
+
"author": "Yiren Song and Yuxuan Zhang.",
|
| 532 |
+
"venue": "In 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, November 21-24, 2022, 2022.",
|
| 533 |
+
"url": null
|
| 534 |
+
}
|
| 535 |
+
},
|
| 536 |
+
{
|
| 537 |
+
"35": {
|
| 538 |
+
"title": "Score-based generative modeling through stochastic differential equations.",
|
| 539 |
+
"author": "Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole.",
|
| 540 |
+
"venue": "In International Conference on Learning Representations (ICLR), 2021b.",
|
| 541 |
+
"url": null
|
| 542 |
+
}
|
| 543 |
+
},
|
| 544 |
+
{
|
| 545 |
+
"36": {
|
| 546 |
+
"title": "Clipvg: Text-guided image manipulation using differentiable vector graphics.",
|
| 547 |
+
"author": "Yiren Song, Xuning Shao, Kang Chen, Weidong Zhang, Zhongliang Jing, and Minzhe Li.",
|
| 548 |
+
"venue": "In Proceedings of the Conference on Artificial Intelligence (AAAI), 2023.",
|
| 549 |
+
"url": null
|
| 550 |
+
}
|
| 551 |
+
},
|
| 552 |
+
{
|
| 553 |
+
"37": {
|
| 554 |
+
"title": "If by deepfloyd lab at stabilityai.",
|
| 555 |
+
"author": "StabilityAI.",
|
| 556 |
+
"venue": "https://github.com/deep-floyd/IF, 2023.",
|
| 557 |
+
"url": null
|
| 558 |
+
}
|
| 559 |
+
},
|
| 560 |
+
{
|
| 561 |
+
"38": {
|
| 562 |
+
"title": "Marvel: Raster gray-level manga vectorization via primitive-wise deep reinforcement learning.",
|
| 563 |
+
"author": "Hao Su, Xuefeng Liu, Jianwei Niu, Jiahe Cui, Ji Wan, Xinghao Wu, and Nana Wang.",
|
| 564 |
+
"venue": "IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT), 2023.",
|
| 565 |
+
"url": null
|
| 566 |
+
}
|
| 567 |
+
},
|
| 568 |
+
{
|
| 569 |
+
"39": {
|
| 570 |
+
"title": "Modern evolution strategies for creativity: Fitting concrete images and abstract concepts.",
|
| 571 |
+
"author": "Yingtao Tian and David Ha.",
|
| 572 |
+
"venue": "In Artificial Intelligence in Music, Sound, Art and Design, pages 275\u2013291. Springer, 2022.",
|
| 573 |
+
"url": null
|
| 574 |
+
}
|
| 575 |
+
},
|
| 576 |
+
{
|
| 577 |
+
"40": {
|
| 578 |
+
"title": "Clipasso: Semantically-aware object sketching.",
|
| 579 |
+
"author": "Yael Vinker, Ehsan Pajouheshgar, Jessica Y Bo, Roman Christian Bachmann, Amit Haim Bermano, Daniel Cohen-Or, Amir Zamir, and Ariel Shamir.",
|
| 580 |
+
"venue": "ACM Transactions on Graphics (TOG), 41(4):1\u201311, 2022.",
|
| 581 |
+
"url": null
|
| 582 |
+
}
|
| 583 |
+
},
|
| 584 |
+
{
|
| 585 |
+
"41": {
|
| 586 |
+
"title": "Clipascene: Scene sketching with different types and levels of abstraction.",
|
| 587 |
+
"author": "Yael Vinker, Yuval Alaluf, Daniel Cohen-Or, and Ariel Shamir.",
|
| 588 |
+
"venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4146\u20134156, 2023.",
|
| 589 |
+
"url": null
|
| 590 |
+
}
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"42": {
|
| 594 |
+
"title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation.",
|
| 595 |
+
"author": "Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A. Yeh, and Greg Shakhnarovich.",
|
| 596 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12619\u201312629, 2023a.",
|
| 597 |
+
"url": null
|
| 598 |
+
}
|
| 599 |
+
},
|
| 600 |
+
{
|
| 601 |
+
"43": {
|
| 602 |
+
"title": "Deepvecfont: Synthesizing high-quality vector fonts via dual-modality learning.",
|
| 603 |
+
"author": "Yizhi Wang and Zhouhui Lian.",
|
| 604 |
+
"venue": "ACM Transactions on Graphics (TOG), 40(6), 2021.",
|
| 605 |
+
"url": null
|
| 606 |
+
}
|
| 607 |
+
},
|
| 608 |
+
{
|
| 609 |
+
"44": {
|
| 610 |
+
"title": "Aesthetic text logo synthesis via content-aware layout inferring.",
|
| 611 |
+
"author": "Yizhi Wang, Gu Pu, Wenhan Luo, Pengfei Wang, Yexin ans Xiong, Hongwen Kang, Zhonghao Wang, and Zhouhui Lian.",
|
| 612 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.",
|
| 613 |
+
"url": null
|
| 614 |
+
}
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"45": {
|
| 618 |
+
"title": "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation.",
|
| 619 |
+
"author": "Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu.",
|
| 620 |
+
"venue": "arXiv preprint arXiv:2305.16213, 2023b.",
|
| 621 |
+
"url": null
|
| 622 |
+
}
|
| 623 |
+
},
|
| 624 |
+
{
|
| 625 |
+
"46": {
|
| 626 |
+
"title": "Iconshop: Text-based vector icon synthesis with autoregressive transformers.",
|
| 627 |
+
"author": "Ronghuan Wu, Wanchao Su, Kede Ma, and Jing Liao.",
|
| 628 |
+
"venue": "arXiv preprint arXiv:2304.14400, 2023a.",
|
| 629 |
+
"url": null
|
| 630 |
+
}
|
| 631 |
+
},
|
| 632 |
+
{
|
| 633 |
+
"47": {
|
| 634 |
+
"title": "Human preference score: Better aligning text-to-image models with human preference.",
|
| 635 |
+
"author": "Xiaoshi Wu, Keqiang Sun, Feng Zhu, Rui Zhao, and Hongsheng Li.",
|
| 636 |
+
"venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2096\u20132105, 2023b.",
|
| 637 |
+
"url": null
|
| 638 |
+
}
|
| 639 |
+
},
|
| 640 |
+
{
|
| 641 |
+
"48": {
|
| 642 |
+
"title": "Diffsketcher: Text guided vector sketch synthesis through latent diffusion models.",
|
| 643 |
+
"author": "Ximing Xing, Chuang Wang, Haitao Zhou, Jing Zhang, Qian Yu, and Dong Xu.",
|
| 644 |
+
"venue": "In Advances in Neural Information Processing Systems (NIPS), 2023.",
|
| 645 |
+
"url": null
|
| 646 |
+
}
|
| 647 |
+
},
|
| 648 |
+
{
|
| 649 |
+
"49": {
|
| 650 |
+
"title": "Imagereward: Learning and evaluating human preferences for text-to-image generation, 2023.",
|
| 651 |
+
"author": "Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong.",
|
| 652 |
+
"venue": null,
|
| 653 |
+
"url": null
|
| 654 |
+
}
|
| 655 |
+
},
|
| 656 |
+
{
|
| 657 |
+
"50": {
|
| 658 |
+
"title": "Glyphcontrol: Glyph conditional control for visual text generation.",
|
| 659 |
+
"author": "Yukang Yang, Dongnan Gui, Yuhui Yuan, Haisong Ding, Han Hu, and Kai Chen.",
|
| 660 |
+
"venue": "2023.",
|
| 661 |
+
"url": null
|
| 662 |
+
}
|
| 663 |
+
}
|
| 664 |
+
],
|
| 665 |
+
"url": "http://arxiv.org/html/2312.16476v6"
|
| 666 |
+
}
|
20241217/2401.15713v3.json
ADDED
|
@@ -0,0 +1,537 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Contrastive Learning and Mixture of Experts Enables Precise Vector Embeddings",
|
| 3 |
+
"abstract": "The advancement of transformer neural networks has significantly elevated the capabilities of sentence similarity models, but they still struggle with highly discriminative tasks and may produce sub-optimal representations of important documents like scientific literature. With the increased reliance on retrieval augmentation and search, representing diverse documents as concise and descriptive vectors is crucial. This paper improves upon the vectors embeddings of scientific text by assembling niche datasets using co-citations as a similarity metric, focusing on biomedical domains. We apply a novel Mixture of Experts (MoE) extension pipeline to pretrained BERT models, where every multi-layer perceptron section is enlarged and copied into multiple distinct experts. Our MoE variants perform well over scientific domains with dedicated experts, whereas standard BERT models excel in only one domain at a time. Notably, extending just a single transformer block to MoE captures 85% of the benefit seen from full MoE extension at every layer. This holds promise for versatile and efficient One-Size-Fits-All transformer networks for numerically representing diverse inputs. Our methodology marks advancements in representation learning and holds promise for enhancing vector database search and compilation.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The remarkable success of transformer-based large language models (LLMs) since 2017 [1 ###reference_b1###] has significantly increased our confidence in their abilities and outputs. Nowadays, LLMs are treated as a de facto knowledge base for many and have been adopted on a mass scale since the release of services like ChatGPT and open-source counterparts like Llama and Mistral [2 ###reference_b2###, 3 ###reference_b3###]. However, despite their widespread use, challenges persist, particularly regarding the accuracy and reliability of these models. Common issues like LLM hallucinations [4 ###reference_b4###, 5 ###reference_b5###] highlight the ongoing need for improvement. The ability to generate reliable vector embeddings and perform precise classification is crucial, especially for technologies that rely on information retrieval and web search.\nOne approach to further curate transformer latent spaces is to utilize contrastive learning to create sentence similarity models, initially revolutionizing sentiment analysis with broader applications in vector search [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. However, even sentence similarity models miss out-of-distribution domain-specific nuances [9 ###reference_b9###, 10 ###reference_b10###], resulting in sub-optimal representations of many important documents, including scientific literature.\nFortunately, several advancements have paved the way toward effective sentence similarity models over an arbitrary number of domains. Work from the metascience community has introduced co-citation networks as an easy way to gather many similar papers [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###]. While this degree of similarity may not be perfect, co-citations have been shown to imply a high degree of similarity between papers [18 ###reference_b18###]. Another promising advancement comes from the deep learning community with Mixture of Experts (MoE) models. Their learned input-dependent routing of information is a promising multidomain / multitask learning architecture without significant added overhead [20 ###reference_b20###]. Taking advantage of these methods, we propose the following framework to build discriminative vector representations of scientific papers from abstracts alone:\nDomain-Specific Fine-Tuning: Application of contrastive fine-tuning methods to pretrained BERT (Bidirectional Encoder Representation Transformers) models utilizing co-citations as a similarity heuristic, tailoring them to learn and understand specific scientific domains.\nUniversal Applicability through Mixture of Experts (MoE): Introduction of a scalable method of seeding MoE models for fine-tuning pretrained BERT models across multiple domains, aiming for a versatile, \u201cOne-Size-Fits-All\u201d model.\nIn this paper, we enhance the precision and reliability of LLMs in identifying similar or niche intradisciplinary texts, to build scalable methods that can enhance LLMs to produce effective vector representations from a large variety of scientific literature. Our methods vastly outperform general pretrained models, fine-tuned sentence similarity models, and even science-oriented BERT models. Notably, our MoE variants, equipped with experts, achieve the efficacy of individual models, suggesting One-Size-Fits-All transformer networks are possible for certain tasks. 
Such models have far-reaching implications for information retrieval, web search, and other applications that rely on precise text classification and vector embeddings."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Methods",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Data Compilation",
|
| 21 |
+
"text": "We used co-citation networks to generate sufficiently large and accurate training datasets. Co-citations represent instances where two papers are cited together in a third paper. This strategy enabled the production of large training datasets from small amounts of data due to their nonlinear nature. For a dataset of 10,000 individual papers, for example, over 125,000 co-citation pairs can be produced. While this measurement of similarity is not perfect, co-citations have generally been shown to imply a high degree of similarity between papers [18 ###reference_b18###].\nGiven the technical subfields and the sparse overlap between them, we chose to use the cardiovascular disease (CVD) and chronic obstructive pulmonary disease (COPD) subfields of the biomedical sciences as case studies for our approach. The CVD and COPD subfields represent two domains that contrast significantly in co-citation network size, which allowed us to compare the performance of our approach as it relates to data availability. Since 2010, around 290,000 articles relating to CVD have been published on PubMed Central, while just 14,000 articles relating to COPD were published. We queried papers using Medical Subject Heading (MeSH) terms for both CVD and COPD, specifically for open-access papers with at least one citation and an abstract. We queried specifically for papers published between 2017 - 2022 for CVD and papers published between 2010 - 2022 for COPD. A longer time range for COPD was used as COPD is a smaller sub-field, and a large enough dataset could not be created by querying from 2017 onward. In total, we queried 99,490 papers for our CVD dataset and 10,051 papers for our COPD dataset.\nWe constructed a test dataset for both sub-fields to prototype our framework on very recent papers, with the goal of applications involving reading list curation for researchers. These were constructed by taking all similar abstract pairs in the training dataset, where at least one paper was published in 2022, the most recent year in our dataset.\nOur validation dataset was constructed from the remaining training dataset. The remaining training dataset was split randomly in a 99:1 ratio without duplicates, with the larger of the new datasets being used as our final training dataset and the smaller of the two being used as the initial validation dataset. On top of this initial validation dataset of similar abstract pairs, an equal amount of dissimilar abstract pairs were added. We generated pairs of dissimilar papers by compiling pairs of papers that had never been co-cited together. While it is impossible to guarantee any two non-co-cited papers will not be cited together in the future we minimized this possibility by requiring papers to be cited individually at least 15 times. While we produced pairs of dissimilar papers for model evaluation, the production of dissimilar paper pairs is not necessary for model training, which we discuss in our objective formulation below.\nAfter constructing the validation and test datasets, we accounted for co-citation frequency in our training dataset by duplicating co-citation pairs that had been co-cited multiple times in the training dataset. If two papers had been co-cited together five times, for example, this duplication would result in this pair of papers occurring five times in our dataset. 
This duplication allowed us to weight frequently co-cited pairs more heavily than rarely co-cited ones.\nTo further diversify the biomedical data available, we applied the same data compilation pipeline to additional sub-fields involving parasitic diseases, skin cancer, and autoimmune diseases. We prototyped our models using separate experiments with only CVD and COPD and then trained them fully on all five domains."
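As a rough illustration of the pair-generation step described above, the following Python sketch counts co-citations from a hypothetical citation map and expands pairs by frequency; the `citing_papers` layout and function names are assumptions, not the authors' released pipeline.

```python
# Minimal sketch (not the authors' pipeline): building co-citation training pairs.
# `citing_papers` maps each citing paper ID to the reference IDs it cites;
# all identifiers and data structures here are hypothetical.
from collections import Counter
from itertools import combinations

def build_cocitation_pairs(citing_papers: dict[str, list[str]]) -> Counter:
    """Count how often each unordered pair of papers is cited together."""
    pair_counts: Counter = Counter()
    for refs in citing_papers.values():
        for a, b in combinations(sorted(set(refs)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def expand_by_frequency(pair_counts: Counter) -> list[tuple[str, str]]:
    """Duplicate each pair by its co-citation count, mirroring the weighting step above."""
    return [pair for pair, n in pair_counts.items() for _ in range(n)]

if __name__ == "__main__":
    toy = {"P1": ["A", "B", "C"], "P2": ["A", "B"], "P3": ["B", "C"]}
    counts = build_cocitation_pairs(toy)
    print(counts)                             # ('A','B'): 2, ('A','C'): 1, ('B','C'): 2
    print(len(expand_by_frequency(counts)))   # 5 training pairs from 3 citing papers
```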
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Transformer Neural Networks",
|
| 27 |
+
"text": "The transformer architecture is adept at sequential processing and is SOTA for NLP tasks [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###]. A transformer block comprised a self-attention layer and multi-layer perception (MLP) interleaved with skip connections. Full transformers were made of transformer blocks stacked together [1 ###reference_b1###].\nPrior to the transformer blocks is the token embedding process, where tokenization maps an input string of language into a list of integers from a dictionary. These integers served as the indices for a matrix , where each row was a learnable representative vector for that token, making where was the total number of unique tokens in the vocabulary and an arbitrarily chosen hidden dimension. The initial embedding was .\nEach block in the transformer then transforms this embedding, i.e., the transformer block maps the embedding to [28 ###reference_b28###, 1 ###reference_b1###, 29 ###reference_b29###]; is the last hidden state of the network. The first part of this map is self-attention, which mixes information across the vectors, followed by the MLP which mixes information across [30 ###reference_b30###, 28 ###reference_b28###].\nIncluding the MLP, the entire transformer block can be written as\nwhere and are biases associated with learned linear transformations and , where . The activation function , e.g., ReLU or GeLU, introduces non-linearity [1 ###reference_b1###].\nGPT (Generative Pretrained Transformers) models or causal models, like OpenAI\u2019s GPT series (GPT-2, GPT-3, etc.), focus on generative tasks and use a variant called transformer decoders [31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###]. They use unidirectional attention when processing text. This means they can predict the next word in a sentence but cannot modify their understanding based on words that come later in the text. BERT models or transformer encoders utilize bidirectional attention, capture more context and word relationships, and are better suited for tasks like text classification and sentence similarity [34 ###reference_b34###]."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.2.1",
|
| 31 |
+
"parent_section_id": "2.2",
|
| 32 |
+
"section_name": "2.2.1 Mixture of Experts",
|
| 33 |
+
"text": "Mixture of Experts (MoE) models add a linear layer or router network to each transformer block, which outputs logits from . These logits route to multiple equivalent copies of the MLP section with different weights called experts [35 ###reference_b35###]. In many transformer variants, this routing is typically done on a per-token basis, allowing for experts to specify in language classes like punctuation, nouns, numbers, etc [36 ###reference_b36###]. We chose sentence-wise routing of the entire so that we could purposely structure our experts for specific domains [37 ###reference_b37###]. While there are many ways to route MoE networks, two main approaches involve calling one expert per block or the top experts per block. We experiment with both approaches.\nControlling the routing of , allowed for a one-size-fits-all approach to text classification where one expert per transformer layer was an expert in a specific domain. To control this routing, we added special tokens for each domain, like [CVD] and [COPD], and replaced the standard [CLS] token with these upon tokenization. An additional cross-entropy loss was added that compared the router logits to the correct domain identity.\nFor faster fine-tuning, we utilized pretrained models for this novel MoE extension approach. Our MoE extension took the MLP sections of pretrained transformers and copied them into experts with randomly initialized routers in each transformer block. In this process, an additional linear layer and bias was added with element-wise multiplication to the MLP (SwiGLU activation) which has been shown to perform better than vanilla activation functions [36 ###reference_b36###].\nWe initialized with zeros and with ones to make the initial forward passes equivalent to the pretrained model and would only be modified during further training.\nEnforced routing refers to manual indexing of chosen experts, which we found worked just as well as an additional cross-entropy loss on the router logits. We chose enforced routing for the dual CVD / COPD experiments as a proof of concept (we also did not add the additional router linear layer). However, for the experiments over all five biomedical domains, we implemented the mutual information loss suggested in [38 ###reference_b38###] to further leverage overlapping similarities across the gathered biomedical domains. This way, we could correlate expert activation with certain domains without direct enforcement and concatenate the top-2 expert results at each layer. Additional local experiments showed that token-wise routing results in slightly higher-end performance, even on sentence-level tasks. Thus, we use the top-2 experts per token for our final five-domain model. Because our MoE extension can be costly in terms of VRAM, we also tried an MoE extension approach with a single transformer block in the middle instead of extending all 12 - hypothesizing that much of the multidomain benefit could be achieved for a small amount of extended MoE layers."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.2.2",
|
| 37 |
+
"parent_section_id": "2.2",
|
| 38 |
+
"section_name": "2.2.2 Models of choice",
|
| 39 |
+
"text": "We chose two differently sized models to prototype our MoE extension. 1.) all-MiniLM-L6-v2 model (Mini) [39 ###reference_b39###, 40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###], a popular sentence transformer model trained on 1+ billion sentence pairs that is highly effective for sentiment analysis and text classification. Mini is a standard BERT-like model with a hidden size of 384, an intermediate size of 1536, 12 attention heads, and six hidden layers for a total of 23 million parameters. 2.) SciBERT is a general-purpose BERT model fine-tuned on entire papers from the Semantic Scholar with over 1.14 million diverse papers [45 ###reference_b45###]. SciBERT is SOTA for many domain-specific relation extraction and sentence classification tasks with a hidden size of 786, an intermediate size of 3072, 12 attention heads, and 12 hidden layers for a total of 110 million parameters. Compared to many newer BERT architectures with up to (or more than) 1 billion parameters, these modest model sizes allowed for effective fine-tuning given minimal training data and reduced computational cost during training iterations.\nMoE versions for the CVD / COPD experiments have a larger parameter count, 30 million and 167 million, respectively, due to two experts (two domains) per layer. However, the effective parameter count is the same as the original because only one expert is called at a time, resulting in fast training and inference. The full domain SciBERT MoE extension versions have five experts and utilize two experts per token for a total of 167 million effective parameters.\nOur fine-tuned models were compared to a basic term frequency-inverse document frequency (TF-IDF) model alongside the following popular base models without any fine-tuning: Mini, BERT, Mpnet [46 ###reference_b46###], Declustr [47 ###reference_b47###], SciBERT, BiomedBERT [48 ###reference_b48###], and ClincalBERT [49 ###reference_b49###]. This large model variety in pretraining strategy allows for a more general comparison of how effective our fine-tuning framework is.\nThe TF-IDF model, representing the most basic and straightforward sentence similarity model, acts as a baseline for expected performance. Finally, we also prompted GPT3.5 with sets of abstracts to have them assess qualitatively if the papers were similar or not."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.3",
|
| 43 |
+
"parent_section_id": "2",
|
| 44 |
+
"section_name": "Training Strategy",
|
| 45 |
+
"text": "To minimize training time, we chose to use abstracts rather than entire papers as the text input to the model. Abstracts represent a human-generated summarized version of a paper and, as a result, include much of the relevant textual information contained in a paper. We fine-tuned Mini and SciBERT on the individual datasets for CVD and COPD, and SciBERT for all five domains. Models without MoE extension are referred to as Single Expert (SE).\nTo train our models, we fed co-cited abstracts from the CVD or COPD domain to the model independently, extracting the vector embedding from the pooler output for each abstract. The pooler output was generated by a small neural network from the [CLS] token (our special domain tokens in our case) embedding from the last hidden state , a standard practice when using the Huggingface transformers package. These vector embeddings were compared with a variant of the Multiple Negative Rankings (MNR) loss used to train cdsBERT [50 ###reference_b50###, 51 ###reference_b51###]. MNR Loss is a loss function that has seen significant success with sentence embedding problems [52 ###reference_b52###] and was highly successful in our local experiments. Our variant used dot products as a similarity heuristic and scaled the similarity by a learned temperature parameter. Furthermore, MNR loss only requires positive/similar text pairs, generating negative/dissimilar text pairs from the other positive pairs in a given mini-batch. As a result, MNR loss removed the need to generate dissimilar text pairs for our training dataset under the assumption that the random chance of finding a similar paper randomly with a batch size of 20 is sufficiently small. During training, we randomly switched the order of the two input abstract pairs to prevent any bias in how they are fed to the loss function.\nWe performed hyperparameter optimization on the CVD dataset, using random search to approximate the best batch size alongside warmup steps and total training length that optimized the model\u2019s F1 score on the validation dataset. Due to resource constraints, we decided to use a smaller dataset than the actual training dataset, with the hyperparameter optimization dataset being a random 10% sample of the training dataset. Resource constraints also limited our tested range for each hyperparameter. After trying batch sizes ranging from 5-20, we found that 20 supported the best model performance. This offered a large enough batch size to require nuanced understanding but not too large to hinder model training, as the contrastive loss chosen was significantly more challenging to minimize as the batch size increased. For our final training runs, we utilized a learning rate of , a one-cycle learning rate scheduler [53 ###reference_b53###] with 500 warmup steps, and periodic validation every 5000 steps. For the final five-domain model, a cosine learning rate scheduler was utilized instead. Training was conducted for ten epochs and halted early when a patience of three was exceeded for the validation . The best model was loaded at the end. SciBERT training was conducted on a single A100 GPU, Mini training was done on a single A10 GPU, and all training runs took less than 24 hours."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.4",
|
| 49 |
+
"parent_section_id": "2",
|
| 50 |
+
"section_name": "Evaluation Strategy",
|
| 51 |
+
"text": "All models were evaluated on the evaluation sets separately for each domain. We utilized cosine similarity between two vectors extracted from an abstract pair to classify the abstracts as co-cited (similar) or not, given a threshold. Cosine similarity is a common vector similarity measure ranging from -1 to 1, where -1 is exactly opposite and 1 occurs for a pair of the same vector. This binary threshold is determined by an F1 variant called [54 ###reference_b54###, 55 ###reference_b55###]. is the maximum F1 score calculated for all possible thresholds for a reported metric. While typically used for imbalanced multilabel classification, randomly choosing a binary threshold for the reported F1 would not be a fair comparison of different models. For example, one model may perform much better with a cosine similarity threshold at 0.5 for abstract text similarity compared to 0.4, or vice versa. We also reported average distance and accuracy. The average distance was calculated by taking the absolute value of the difference between the similarity score between two abstract pairs and their label, 0 or 1. For example, if a model gave two co-cited (similar) abstracts a cosine similarity of 0.75, the distance would be = 0.25. Full details for conversion of any two abstract pairs into binary similar and dissimilar metrics are summarized in Figure 1 ###reference_###. Because the test dataset contains no negative examples, we limited the threshold search between 0.5 and 1 to prevent the trivial -1 threshold that always leads to and reported the accuracy using the found threshold.\n###figure_1###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Results",
|
| 57 |
+
"text": "The performance of our models on the validation and test datasets for CVD (Table 1 ###reference_###) and COPD (Table 2 ###reference_###) were summarized and compared to other leading sentence similarity models, as well as TF-IDF. We also evaluated the performance of GPT-3.5 on our generated validation and test datasets. GPT-3.5 represented an example of an LLM with a narrow representation of similar (Figure 2 ###reference_###). GPT-3.5 regarded almost all examples that were input as dissimilar when asked whether the input examples were similar due to (correctly) identified minor differences between each input text. Given this, GPT-3.5 performance was excluded from model comparisons. We also do not show results from the MoE version of Mini as they did not score better than random on either validation dataset (0.67 ).\n###figure_2### ###table_1### ###table_2### The results in Tables 1 ###reference_### and 2 ###reference_### highlight the effectiveness of our fine-tuning strategy, as our models demonstrated a pronounced proficiency in identifying similar or dissimilar papers within highly specific domains. On the CVD dataset, our models achieved superior and accuracy scores compared to all base models. Particularly, our SciBert variants exhibited a near-perfect 0.97 . While the and accuracy scores were lower on the COPD dataset, our models performed better than the base models for the COPD dataset. All our models surpassed every base model evaluated on the COPD datasets by at least 10% in both and accuracy scores. This performance highlights the capability of our approach to yield high-quality results even with limited training data.\nWe moved beyond enforced routing and utilized a mutual information loss to train SciBERT MoE extended models across CVD, COPD, skin cancer, autoimmune disease, and parasitic disease domains (Table 3 ###reference_###). As a baseline, we trained a model SE-All, which is trained and evaluated on all five domains without MoE extension - effectively fine-tuned SciBERT. Similarly to our initial studies, the MoE extended models performed almost equivalently to five independently trained models together on average, with vs. , respectively. The single MoE extended (SMoE) captured 85% of the added multi-domain proficiency over SE-All with vs. average . This ability points to the robustness and versatility of the multidomain / multitask MoE approach through various routing and extension strategies."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Discussion",
|
| 63 |
+
"text": "Our work advances the use of transformer language models by focusing on improving their domain-specific understanding and document-wide comprehension. We have shown that common pretrained models, including ChatGPT, cannot distinguish the differences in highly discriminative text inputs. Presumably, this phenotype in GPT-like LLMs is primarily caused by standard prompt construction and formatting. By including multiple distinct documents in a single prompt, the semantic similarity of similar tokens will prevent effective distinction between the documents, as portions of each document will attend highly to each other even if they are \u201cdifferent\u201d as defined by a desired discrimination. With pretrained BERT-like model usage, presumably this poor performance comes from out-of-distribution tasks and a low ability to generalize with high discrimination. The sentence similarity approach offers a more effective summarization technique by inputting documents separately and allowing the model to construct descriptive numerical representations through contrastive objectives. As vector databases become increasingly prevalent for search and retrieval, the quality of these numerical representations becomes increasingly important. Our innovative framework, which incorporates contrastive learning, custom MNR and router losses, novel special tokens, MoE seeding, and extension, as well as various routing techniques, significantly enhances sentence similarity metrics compared to pretrained transformers. We leveraged co-citation networks to construct large datasets of similar abstracts and applied our framework to scientific literature, creating nuanced representations for discriminative subfields of biomedical domains.\nWithout applying a threshold search for the F1 scores, all base pretrained models tested perform with random chance or worse when tasked with classifying papers as co-cited or not based on cosine similarity. By searching all possible cosine similarity thresholds for , pretrained models get a boost for fair comparison while creating a scenario that is unrealistic for actual inference, as a consistent similarity threshold is needed for actual use. Once compared fairly, our fine-tuned models performed better than their original counterparts. More specifically, our SciBERT SE and MoE variants performed equally on the CVD validation set with an of 0.97, an average distance of 0.48, and an accuracy of 0.97. The Mini SE variant performed similarly with a of 0.94, an average distance of 0.23, and an accuracy of 0.93. On the COPD validation set, Mini SE performed best with a of 0.83, an average distance of 0.33, and an accuracy of 0.82. Following that is SciBERT SE with 0.81 and SciBERT MoE with 0.80. Both SciBERT variants resulted in an average distance of 0.49 and accuracies of 0.80 and 0.79, respectively. Importantly, our SciBERT variants consistently performed optimally with a high cosine similarity, implying that a standardized threshold near 0.98 or 0.99 could be utilized during inference. Further penalization terms on the average similarity between batches during training could enforce this threshold lower for different applications.\nThe test sets require closer examination due to the lack of negative examples, constrained by the inclusion of newer papers and a lack of literature cited 15+ times since 2022. To accommodate this, we withheld from the tables since there is a trivial cutoff of -1 to get 1.00 . 
Instead, we limited the threshold search between 0.5 and 1.0 and reported the accuracy using the found threshold, providing a nontrivial representation of performance. Overall, we found that SciBERT variants demonstrated more precise vector representations with 1.00 accuracy across the board. In contrast, Mini SE variants had lower accuracies of 0.97 for CVD and 0.56 for COPD. Surprisingly, the Mini SE CVD variant performed better than the Mini SE COPD variant on the COPD test data. This suggests that the cosine similarity threshold limit of 0.5 may have artificially hindered the metrics evaluated. The average distance metric offers additional context for test dataset performance. SciBERT variants excelled at placing similar abstract vector representations close in space with an average distance of 0.01 for CVD and COPD test sets. Conversely, the smaller Mini SE had a high distance even compared to base models. Notably, the BiomedBERT base model also had an average test set distance of 0.01 on both test sets, which is unsurprising given the possibility of training data overlap. Despite MoE engineering, our attempts with a Mini MoE variant were less successful, suggesting a minimum size requirement in the base BERT model for effective performance. This may be due to the need for sufficiently capable shared attention layers that can generalize to support diverse experts and domains.\nImportantly, our MoE approach across all domains performs similarly to our individual-domain SE models. Additionally, the MoE seeding is fully scalable, appearing to enable experts and new special tokens given different datasets. This is further supported by our experiments shown in Table 3 ###reference_###, where MoE models with five experts perform well on five domains, even with a single MoE layer. The substantial improvement in performance achieved by adding a single MoE layer highlights a remarkable benefit with little added computational cost. This is particularly promising for using our single-layer MoE extension with large pretrained models to train for multitask / multidomain tasks. Future experiments may find the optimal ratio of experts per domain alongside the correct discrimination of \u201cdomain\u201d to create a one-size-fits-all vector embedding model at the scale of Semantic Scholar.\nOur use of co-citation networks enables rapid and efficient dataset compilation for training transformers in niche scientific domains. Fine-tuning of base BERT models through contrastive learning with an MNR-inspired loss significantly improves sentence similarity. The MoE approach further expands these capabilities, suggesting the feasibility of a universal model for text classification and vector embeddings across various domains through MoE seeding and enforced routing. Using these new models, effective MoE BERT models with specialized knowledge across multiple fields, vocabularies, or tasks can be developed."
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [],
|
| 67 |
+
"tables": {
|
| 68 |
+
"1": {
|
| 69 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.5\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_t\" id=\"S3.T1.5.5.6\" style=\"padding-bottom:2.15277pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.5.6.1\">CVD Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.5.7\" style=\"padding-bottom:2.15277pt;\">Cutoff</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.1\" style=\"padding-bottom:2.15277pt;\">F1Max\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.2.2\" style=\"padding-bottom:2.15277pt;\">Dist.\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.3.3\" style=\"padding-bottom:2.15277pt;\">Acc.\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.4.4.4\" style=\"padding-bottom:2.15277pt;\">Dist.\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S3.T1.5.5.5\" style=\"padding-bottom:2.15277pt;\">Acc.\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.6.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"7\" id=\"S3.T1.5.6.1.1\">Our models</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.7.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T1.5.7.2.1\">Mini-SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.7.2.2\">0.51</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.7.2.3\">0.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.7.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.7.2.4.1\">0.23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.7.2.5\">0.93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.7.2.6\">0.19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T1.5.7.2.7\">0.97</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.8.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.8.3.1\">SciBERT-SE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.3.2\">0.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.8.3.3.1\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.3.4\">0.48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.8.3.5.1\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.8.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.8.3.6.1\">0.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T1.5.8.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.8.3.7.1\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.9.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.9.4.1\">SciBERT-MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.4.2\">0.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.4.3\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T1.5.9.4.3.1\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.4.4\">0.48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.9.4.5\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T1.5.9.4.5.1\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T1.5.9.4.6\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T1.5.9.4.6.1\">0.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T1.5.9.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.9.4.7.1\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.10.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"7\" id=\"S3.T1.5.10.5.1\">Base models</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.11.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T1.5.11.6.1\">TF-IDF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.11.6.2\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.11.6.3\">0.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.11.6.4\">0.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.11.6.5\">0.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.11.6.6\">0.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T1.5.11.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.11.6.7.1\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.12.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.12.7.1\">Mini</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.7.2\">0.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.7.3\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.7.4\">0.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.7.5\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.12.7.6\">0.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T1.5.12.7.7\">0.95</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.13.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.13.8.1\">BERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.8.2\">0.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.8.3\">0.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.8.4\">0.48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.8.5\">0.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.13.8.6\">0.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T1.5.13.8.7\">0.89</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.14.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.14.9.1\">Mpnet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.9.2\">0.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.9.3\">0.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.9.4\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T1.5.14.9.4.1\">0.28</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.9.5\">0.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.14.9.6\">0.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T1.5.14.9.7\">0.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.15.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.15.10.1\">Declutr</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.15.10.2\">0.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.15.10.3\">0.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.15.10.4\">0.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.15.10.5\">0.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.15.10.6\">0.25</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_rr\" id=\"S3.T1.5.15.10.7\">0.89</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.16.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.16.11.1\">SciBERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.16.11.2\">0.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.16.11.3\">0.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.16.11.4\">0.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.16.11.5\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.16.11.6\">0.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T1.5.16.11.7\">0.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.17.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T1.5.17.12.1\">BiomedBERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.17.12.2\">0.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.17.12.3\">0.72</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.17.12.4\">0.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.17.12.5\">0.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.17.12.6\">0.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T1.5.17.12.7\">0.76</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.18.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_ll\" id=\"S3.T1.5.18.13.1\">ClinicalBERT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.5.18.13.2\">0.92</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.5.18.13.3\">0.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.5.18.13.4\">0.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.5.18.13.5\">0.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.5.18.13.6\">0.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S3.T1.5.18.13.7\">0.81</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Evaluation metrics of fine-tuned models compared to base models for the CVD datasets [<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.12.1\">bold</span> is best and <span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T1.13.2\">underlined</span> is second best]. Single expert (SE) and Mixture of Expert (MoE) models are compared, showcasing near-identical performance despite MoE models exhibiting mastery of multiple datasets. is not shown for the test set due to the trivial -1 threshold resulting in all models performing perfectly due to no negative data. Thus, a high describes performance for the validation data, and a low distance (Dist) is the important metric for performance on test data.</figcaption>\n</figure>",
|
| 70 |
+
"capture": "Table 1: Evaluation metrics of fine-tuned models compared to base models for the CVD datasets [bold is best and underlined is second best]. Single expert (SE) and Mixture of Expert (MoE) models are compared, showcasing near-identical performance despite MoE models exhibiting mastery of multiple datasets. is not shown for the test set due to the trivial -1 threshold resulting in all models performing perfectly due to no negative data. Thus, a high describes performance for the validation data, and a low distance (Dist) is the important metric for performance on test data."
|
| 71 |
+
},
|
| 72 |
+
"2": {
|
| 73 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.5\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_t\" id=\"S3.T2.5.5.6\" style=\"padding-bottom:2.15277pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.5.6.1\">COPD Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.5.7\" style=\"padding-bottom:2.15277pt;\">Cutoff</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.1\" style=\"padding-bottom:2.15277pt;\">F1Max\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.2\" style=\"padding-bottom:2.15277pt;\">Dist.\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.3\" style=\"padding-bottom:2.15277pt;\">Acc.\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.4.4\" style=\"padding-bottom:2.15277pt;\">Dist.\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S3.T2.5.5.5\" style=\"padding-bottom:2.15277pt;\">Acc.\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.6.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"7\" id=\"S3.T2.5.6.1.1\">Our models</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.7.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T2.5.7.2.1\">Mini-SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.7.2.2\">0.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.7.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.7.2.3.1\">0.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.7.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.7.2.4.1\">0.33</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.7.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.7.2.5.1\">0.82</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.7.2.6\">0.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T2.5.7.2.7\">0.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.8.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.8.3.1\">SciBERT-SE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.8.3.2\">0.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.8.3.3\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T2.5.8.3.3.1\">0.81</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.8.3.4\">0.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.8.3.5\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T2.5.8.3.5.1\">0.80</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.8.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.8.3.6.1\">0.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T2.5.8.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.8.3.7.1\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.9.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.9.4.1\">SciBERT-MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.9.4.2\">0.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.9.4.3\">0.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.9.4.4\">0.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.9.4.5\">0.79</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T2.5.9.4.6\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T2.5.9.4.6.1\">0.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T2.5.9.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.9.4.7.1\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.10.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"7\" id=\"S3.T2.5.10.5.1\">Base models</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.11.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T2.5.11.6.1\">TF-IDF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.11.6.2\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.11.6.3\">0.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.11.6.4\">0.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.11.6.5\">0.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.5.11.6.6\">0.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T2.5.11.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.11.6.7.1\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.12.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.12.7.1\">Mini</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.12.7.2\">0.48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.12.7.3\">0.69</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.12.7.4\">0.45</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.12.7.5\">0.61</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.12.7.6\">0.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T2.5.12.7.7\">0.76</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.13.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.13.8.1\">BERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.13.8.2\">0.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.13.8.3\">0.69</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.13.8.4\">0.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.13.8.5\">0.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.13.8.6\">0.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T2.5.13.8.7\">0.85</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.14.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.14.9.1\">Mpnet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.14.9.2\">0.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.14.9.3\">0.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.14.9.4\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T2.5.14.9.4.1\">0.45</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.14.9.5\">0.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.14.9.6\">0.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T2.5.14.9.7\">0.71</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.15.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.15.10.1\">Declutr</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.15.10.2\">0.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.15.10.3\">0.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.15.10.4\">0.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.15.10.5\">0.56</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.15.10.6\">0.27</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_rr\" id=\"S3.T2.5.15.10.7\">0.91</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.16.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.16.11.1\">SciBERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.16.11.2\">0.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.16.11.3\">0.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.16.11.4\">0.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.16.11.5\">0.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.16.11.6\">0.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T2.5.16.11.7\">0.82</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.17.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T2.5.17.12.1\">BiomedBERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.17.12.2\">0.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.17.12.3\">0.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.17.12.4\">0.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.17.12.5\">0.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.17.12.6\">0.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T2.5.17.12.7\">0.97</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.18.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_ll\" id=\"S3.T2.5.18.13.1\">ClinicalBERT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.5.18.13.2\">0.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.5.18.13.3\">0.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.5.18.13.4\">0.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.5.18.13.5\">0.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.5.18.13.6\">0.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S3.T2.5.18.13.7\">0.80</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Evaluation metrics of fine-tuned models compared to base models for the COPD datasets [<span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.1\">bold</span> is best and <span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S3.T2.13.2\">underlined</span> is second best]. Single expert (SE) and Mixture of Expert (MoE) models are compared, showcasing near-identical performance despite MoE models exhibiting mastery of multiple datasets. is not shown for the test set due to the trivial -1 threshold resulting in all models performing perfectly due to no negative data.n Thus, a high describes performance for the validation data, and a low distance (Dist) is the important metric for performance on test data.</figcaption>\n</figure>",
|
| 74 |
+
"capture": "Table 2: Evaluation metrics of fine-tuned models compared to base models for the COPD datasets [bold is best and underlined is second best]. Single expert (SE) and Mixture of Expert (MoE) models are compared, showcasing near-identical performance despite MoE models exhibiting mastery of multiple datasets. is not shown for the test set due to the trivial -1 threshold resulting in all models performing perfectly due to no negative data.n Thus, a high describes performance for the validation data, and a low distance (Dist) is the important metric for performance on test data."
|
| 75 |
+
},
|
| 76 |
+
"3": {
|
| 77 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_ll ltx_border_t\" id=\"S3.T3.3.3.4\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.1.1.1\">Prec.\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.2.2.2\">Recall\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.3.3.3\">F1\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr ltx_border_t\" id=\"S3.T3.3.3.5\">Cutoff</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.4.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"5\" id=\"S3.T3.3.4.1.1\">CVD</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.3.5.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T3.3.5.1.1\">SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.5.1.2\">0.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.5.1.3\">0.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.5.1.4\">0.95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T3.3.5.1.5\">1.00</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.6.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.6.2.1\">MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.6.2.2\">0.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.6.2.3\">0.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.6.2.4\">0.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.6.2.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.7.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.7.3.1\">SMoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.7.3.2\">0.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.7.3.3\">0.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.7.3.4\">0.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.7.3.5\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.8.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.8.4.1\">SE-All</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.8.4.2\">0.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.8.4.3\">1.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.8.4.4\">0.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.8.4.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.9.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"5\" id=\"S3.T3.3.9.5.1\">COPD</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.10.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T3.3.10.6.1\">SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.10.6.2\">0.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.10.6.3\">0.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.10.6.4\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T3.3.10.6.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.11.7\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_ll\" id=\"S3.T3.3.11.7.1\">MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.11.7.2\">0.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.11.7.3\">0.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.11.7.4\">0.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.11.7.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.12.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.12.8.1\">SMoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.12.8.2\">0.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.12.8.3\">0.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.12.8.4\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.12.8.5\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.13.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.13.9.1\">SE-All</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.13.9.2\">0.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.13.9.3\">0.95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.13.9.4\">0.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.13.9.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.14.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"5\" id=\"S3.T3.3.14.10.1\">Skin Cancer</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.15.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T3.3.15.11.1\">SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.15.11.2\">0.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.15.11.3\">0.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.15.11.4\">0.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T3.3.15.11.5\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.16.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.16.12.1\">MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.16.12.2\">0.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.16.12.3\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.16.12.4\">0.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.16.12.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.17.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.17.13.1\">SMoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.17.13.2\">0.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.17.13.3\">0.96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.17.13.4\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.17.13.5\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.18.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.18.14.1\">SE-All</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.18.14.2\">0.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.18.14.3\">1.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.18.14.4\">0.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.18.14.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.19.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"5\" id=\"S3.T3.3.19.15.1\">Autoimmune Disease</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.20.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" 
id=\"S3.T3.3.20.16.1\">SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.20.16.2\">0.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.20.16.3\">0.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.20.16.4\">0.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T3.3.20.16.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.21.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.21.17.1\">MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.21.17.2\">0.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.21.17.3\">0.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.21.17.4\">0.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.21.17.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.22.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.22.18.1\">SMoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.22.18.2\">0.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.22.18.3\">0.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.22.18.4\">0.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.22.18.5\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.23.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.23.19.1\">SE-All</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.23.19.2\">0.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.23.19.3\">1.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.23.19.4\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.23.19.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.24.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"5\" id=\"S3.T3.3.24.20.1\">Parasitic Disease</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.25.21\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T3.3.25.21.1\">SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.25.21.2\">0.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.25.21.3\">0.93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.25.21.4\">0.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T3.3.25.21.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.26.22\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.26.22.1\">MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.26.22.2\">0.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.26.22.3\">0.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.26.22.4\">0.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.26.22.5\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.27.23\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.27.23.1\">SMoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.27.23.2\">0.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.27.23.3\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.27.23.4\">0.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.27.23.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.28.24\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.28.24.1\">SE-All</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.28.24.2\">0.94</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T3.3.28.24.3\">0.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.28.24.4\">0.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.28.24.5\">1.00</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.29.25\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_rr ltx_border_tt\" colspan=\"5\" id=\"S3.T3.3.29.25.1\">Average</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.30.26\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_tt\" id=\"S3.T3.3.30.26.1\">SE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.30.26.2\">0.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.30.26.3\">0.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.3.30.26.4\">0.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S3.T3.3.30.26.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.31.27\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.31.27.1\">MoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.31.27.2\">0.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.31.27.3\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.31.27.4\">0.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.31.27.5\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.32.28\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll\" id=\"S3.T3.3.32.28.1\">SMoE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.32.28.2\">0.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.32.28.3\">0.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.3.32.28.4\">0.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S3.T3.3.32.28.5\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.3.33.29\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_ll\" id=\"S3.T3.3.33.29.1\">SE-All</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T3.3.33.29.2\">0.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T3.3.33.29.3\">0.93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T3.3.33.29.4\">0.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S3.T3.3.33.29.5\">0.99</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Validation metrics across five gathered biomedical domains. SE is trained and evaluated on each domain independently. MoE, SMoE, and SE-All are trained and evaluated on all domains.</figcaption>\n</figure>",
|
| 78 |
+
"capture": "Table 3: Validation metrics across five gathered biomedical domains. SE is trained and evaluated on each domain independently. MoE, SMoE, and SE-All are trained and evaluated on all domains."
|
| 79 |
+
}
|
| 80 |
+
},
|
| 81 |
+
"image_paths": {
|
| 82 |
+
"1": {
|
| 83 |
+
"figure_path": "2401.15713v3_figure_1.png",
|
| 84 |
+
"caption": "Figure 1: Method for determination of abstract pair similarity for model evaluation.",
|
| 85 |
+
"url": "http://arxiv.org/html/2401.15713v3/extracted/6077398/APToScore.png"
|
| 86 |
+
},
|
| 87 |
+
"2": {
|
| 88 |
+
"figure_path": "2401.15713v3_figure_2.png",
|
| 89 |
+
"caption": "Figure 2: A typical ChatGPT response to a set of similar papers, qualitatively classifying all similar papers as dissimilar.",
|
| 90 |
+
"url": "http://arxiv.org/html/2401.15713v3/extracted/6077398/GPTResponse.png"
|
| 91 |
+
}
|
| 92 |
+
},
|
| 93 |
+
"validation": true,
|
| 94 |
+
"references": [
|
| 95 |
+
{
|
| 96 |
+
"1": {
|
| 97 |
+
"title": "Attention is all you need.",
|
| 98 |
+
"author": "Vaswani, A. et al.",
|
| 99 |
+
"venue": "In Guyon, I. et al. (eds.) Advances in Neural Information Processing Systems, vol. 30 (Curran Associates, Inc., 2017).",
|
| 100 |
+
"url": null
|
| 101 |
+
}
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"2": {
|
| 105 |
+
"title": "Llama 2: Open foundation and fine-tuned chat models, DOI: 10.48550/arXiv.2307.09288.",
|
| 106 |
+
"author": "Touvron, H. et al.",
|
| 107 |
+
"venue": "2307.09288[cs].",
|
| 108 |
+
"url": null
|
| 109 |
+
}
|
| 110 |
+
},
|
| 111 |
+
{
|
| 112 |
+
"3": {
|
| 113 |
+
"title": "Mistral 7b, DOI: 10.48550/arXiv.2310.06825.",
|
| 114 |
+
"author": "Jiang, A. Q. et al.",
|
| 115 |
+
"venue": "2310.06825[cs].",
|
| 116 |
+
"url": null
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"4": {
|
| 121 |
+
"title": "Shortcutted commonsense: Data spuriousness in deep learning of commonsense reasoning.",
|
| 122 |
+
"author": "Branco, R., Branco, A., Ant\u00f3nio Rodrigues, J. & Silva, J. R.",
|
| 123 |
+
"venue": "In Moens, M.-F., Huang, X., Specia, L. & Yih, S. W.-t. (eds.) Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 1504\u20131521, DOI: 10.18653/v1/2021.emnlp-main.113 (Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 2021).",
|
| 124 |
+
"url": null
|
| 125 |
+
}
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"5": {
|
| 129 |
+
"title": "Looking for a needle in a haystack: A comprehensive study of hallucinations in neural machine translation.",
|
| 130 |
+
"author": "Guerreiro, N. M., Voita, E. & Martins, A.",
|
| 131 |
+
"venue": "In Vlachos, A. & Augenstein, I. (eds.) Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 1059\u20131075, DOI: 10.18653/v1/2023.eacl-main.75 (Association for Computational Linguistics, Dubrovnik, Croatia, 2023).",
|
| 132 |
+
"url": null
|
| 133 |
+
}
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"6": {
|
| 137 |
+
"title": "Sentence-BERT: Sentence embeddings using Siamese BERT-networks.",
|
| 138 |
+
"author": "Reimers, N. & Gurevych, I.",
|
| 139 |
+
"venue": "In Inui, K., Jiang, J., Ng, V. & Wan, X. (eds.) Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3982\u20133992, DOI: 10.18653/v1/D19-1410 (Association for Computational Linguistics, Hong Kong, China, 2019).",
|
| 140 |
+
"url": null
|
| 141 |
+
}
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"7": {
|
| 145 |
+
"title": "An efficient framework for sentence similarity modeling.",
|
| 146 |
+
"author": "Quan, Z. et al.",
|
| 147 |
+
"venue": "\\JournalTitleIEEE/ACM Transactions on Audio, Speech, and Language Processing 27, 853\u2013865, DOI: 10.1109/TASLP.2019.2899494 (2019).",
|
| 148 |
+
"url": null
|
| 149 |
+
}
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"8": {
|
| 153 |
+
"title": "A novel sentence similarity model with word embedding based on convolutional neural network.",
|
| 154 |
+
"author": "Yao, H., Liu, H. & Zhang, P.",
|
| 155 |
+
"venue": "\\JournalTitleConcurrency and Computation: Practice and Experience 30, e4415, DOI: https://doi.org/10.1002/cpe.4415 (2018).",
|
| 156 |
+
"url": null
|
| 157 |
+
}
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"9": {
|
| 161 |
+
"title": "DeCLUTR: Deep contrastive learning for unsupervised textual representations.",
|
| 162 |
+
"author": "Giorgi, J., Nitski, O., Wang, B. & Bader, G.",
|
| 163 |
+
"venue": "In Zong, C., Xia, F., Li, W. & Navigli, R. (eds.) Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 879\u2013895, DOI: 10.18653/v1/2021.acl-long.72 (Association for Computational Linguistics, Online, 2021).",
|
| 164 |
+
"url": null
|
| 165 |
+
}
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"10": {
|
| 169 |
+
"title": "Unsupervised keyword combination query generation from online health related content for evidence-based fact checking.",
|
| 170 |
+
"author": "Deka, P., Jurek-Loughrey, A. & Deepak.",
|
| 171 |
+
"venue": "\\JournalTitleThe 23rd International Conference on Information Integration and Web Intelligence DOI: 10.1145/3487664.3487701 (2021).",
|
| 172 |
+
"url": null
|
| 173 |
+
}
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"11": {
|
| 177 |
+
"title": "Predicting citation counts based on deep neural network learning techniques.",
|
| 178 |
+
"author": "Abrishami, A. & Aliakbary, S.",
|
| 179 |
+
"venue": "\\JournalTitleJournal of Informetrics 13, 485\u2013499, DOI: https://doi.org/10.1016/j.joi.2019.02.011 (2019).",
|
| 180 |
+
"url": null
|
| 181 |
+
}
|
| 182 |
+
},
|
| 183 |
+
{
|
| 184 |
+
"12": {
|
| 185 |
+
"title": "Detecting research focus and research fronts in the medical big data field using co-word and co-citation analysis.",
|
| 186 |
+
"author": "Zhang, T., Chi, H. & Ouyang, Z.",
|
| 187 |
+
"venue": "In 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), 313\u2013320, DOI: 10.1109/HPCC/SmartCity/DSS.2018.00072 (2018).",
|
| 188 |
+
"url": null
|
| 189 |
+
}
|
| 190 |
+
},
|
| 191 |
+
{
|
| 192 |
+
"13": {
|
| 193 |
+
"title": "Predicting citation patterns: Defining and determining influence.",
|
| 194 |
+
"author": "Brizan, D. G., Gallagher, K., Jahangir, A. & Brown, T.",
|
| 195 |
+
"venue": "\\JournalTitleScientometrics 108, 183\u2013200, DOI: 10.1007/s11192-016-1950-1 (2016).",
|
| 196 |
+
"url": null
|
| 197 |
+
}
|
| 198 |
+
},
|
| 199 |
+
{
|
| 200 |
+
"14": {
|
| 201 |
+
"title": "A semantic similarity-based identification method for implicit citation functions and sentiments information.",
|
| 202 |
+
"author": "Malkawi, R., Daradkeh, M., El-Hassan, A. & Petrov, P.",
|
| 203 |
+
"venue": "\\JournalTitleInformation 13, DOI: 10.3390/info13110546 (2022).",
|
| 204 |
+
"url": null
|
| 205 |
+
}
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"15": {
|
| 209 |
+
"title": "Discovering related scientific literature beyond semantic similarity: A new co-citation approach.",
|
| 210 |
+
"author": "Rodriguez-Prieto, O., Araujo, L. & Martinez-Romo, J.",
|
| 211 |
+
"venue": "\\JournalTitleScientometrics 120, 105\u2013127, DOI: 10.1007/s11192-019-03125-9 (2019).",
|
| 212 |
+
"url": null
|
| 213 |
+
}
|
| 214 |
+
},
|
| 215 |
+
{
|
| 216 |
+
"16": {
|
| 217 |
+
"title": "Extended co-citation search: Graph-based document retrieval on a co-citation network containing citation context information.",
|
| 218 |
+
"author": "Eto, M.",
|
| 219 |
+
"venue": "\\JournalTitleInformation Processing and Management 56, 102046, DOI: https://doi.org/10.1016/j.ipm.2019.05.007 (2019).",
|
| 220 |
+
"url": null
|
| 221 |
+
}
|
| 222 |
+
},
|
| 223 |
+
{
|
| 224 |
+
"17": {
|
| 225 |
+
"title": "Multi-modal adversarial autoencoders for recommendations of citations and subject labels.",
|
| 226 |
+
"author": "Galke, L., Mai, F., Vagliano, I. & Scherp, A.",
|
| 227 |
+
"venue": "In Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization, UMAP \u201918, 197\u2013205, DOI: 10.1145/3209219.3209236 (Association for Computing Machinery, New York, NY, USA, 2018).",
|
| 228 |
+
"url": null
|
| 229 |
+
}
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"18": {
|
| 233 |
+
"title": "The closer the better: Similarity of publication pairs at different cocitation levels.",
|
| 234 |
+
"author": "Colavizza, G., Boyack, K. W., van Eck, N. J. & Waltman, L.",
|
| 235 |
+
"venue": "\\JournalTitleJournal of the Association for Information Science and Technology 69, 600\u2013609, DOI: https://doi.org/10.1002/asi.23981 (2018).",
|
| 236 |
+
"url": null
|
| 237 |
+
}
|
| 238 |
+
},
|
| 239 |
+
{
|
| 240 |
+
"19": {
|
| 241 |
+
"title": "Citation proximity analysis (cpa) : A new approach for identifying related work based on co-citation analysis.",
|
| 242 |
+
"author": "Gipp, B. & Beel, J.",
|
| 243 |
+
"venue": "In Computer Science (2009).",
|
| 244 |
+
"url": null
|
| 245 |
+
}
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"20": {
|
| 249 |
+
"title": "Sparsely activated mixture-of-experts are robust multi-task learners.",
|
| 250 |
+
"author": "Gupta, S. et al.",
|
| 251 |
+
"venue": "\\JournalTitlearXiv DOI: 10.48550/arXiv.2204.07689 (2022).",
|
| 252 |
+
"url": null
|
| 253 |
+
}
|
| 254 |
+
},
|
| 255 |
+
{
|
| 256 |
+
"21": {
|
| 257 |
+
"title": "Open llm leaderboard.",
|
| 258 |
+
"author": "Beeching, E. et al.",
|
| 259 |
+
"venue": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard (2023).",
|
| 260 |
+
"url": null
|
| 261 |
+
}
|
| 262 |
+
},
|
| 263 |
+
{
|
| 264 |
+
"22": {
|
| 265 |
+
"title": "Think you have solved question answering? try arc, the ai2 reasoning challenge (2018).",
|
| 266 |
+
"author": "Clark, P. et al.",
|
| 267 |
+
"venue": "1803.05457.",
|
| 268 |
+
"url": null
|
| 269 |
+
}
|
| 270 |
+
},
|
| 271 |
+
{
|
| 272 |
+
"23": {
|
| 273 |
+
"title": "Hellaswag: Can a machine really finish your sentence? (2019).",
|
| 274 |
+
"author": "Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A. & Choi, Y.",
|
| 275 |
+
"venue": "1905.07830.",
|
| 276 |
+
"url": null
|
| 277 |
+
}
|
| 278 |
+
},
|
| 279 |
+
{
|
| 280 |
+
"24": {
|
| 281 |
+
"title": "Measuring massive multitask language understanding (2021).",
|
| 282 |
+
"author": "Hendrycks, D. et al.",
|
| 283 |
+
"venue": "2009.03300.",
|
| 284 |
+
"url": null
|
| 285 |
+
}
|
| 286 |
+
},
|
| 287 |
+
{
|
| 288 |
+
"25": {
|
| 289 |
+
"title": "Truthfulqa: Measuring how models mimic human falsehoods (2022).",
|
| 290 |
+
"author": "Lin, S., Hilton, J. & Evans, O.",
|
| 291 |
+
"venue": "2109.07958.",
|
| 292 |
+
"url": null
|
| 293 |
+
}
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"26": {
|
| 297 |
+
"title": "WINOGRANDE: an adversarial winograd schema challenge at scale (2019).",
|
| 298 |
+
"author": "Sakaguchi, K., Bras, R. L., Bhagavatula, C. & Choi, Y.",
|
| 299 |
+
"venue": "1907.10641.",
|
| 300 |
+
"url": null
|
| 301 |
+
}
|
| 302 |
+
},
|
| 303 |
+
{
|
| 304 |
+
"27": {
|
| 305 |
+
"title": "Training verifiers to solve math word problems (2021).",
|
| 306 |
+
"author": "Cobbe, K. et al.",
|
| 307 |
+
"venue": "2110.14168.",
|
| 308 |
+
"url": null
|
| 309 |
+
}
|
| 310 |
+
},
|
| 311 |
+
{
|
| 312 |
+
"28": {
|
| 313 |
+
"title": "The truth is in there: Improving reasoning in language models with layer-selective rank reduction, DOI: 10.48550/arXiv.2312.13558.",
|
| 314 |
+
"author": "Sharma, P., Ash, J. T. & Misra, D.",
|
| 315 |
+
"venue": "2312.13558[cs].",
|
| 316 |
+
"url": null
|
| 317 |
+
}
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"29": {
|
| 321 |
+
"title": "Protein-protein interaction prediction is achievable with large language models, DOI: 10.1101/2023.06.07.544109.",
|
| 322 |
+
"author": "Hallee, L. & Gleghorn, J. P.",
|
| 323 |
+
"venue": null,
|
| 324 |
+
"url": null
|
| 325 |
+
}
|
| 326 |
+
},
|
| 327 |
+
{
|
| 328 |
+
"30": {
|
| 329 |
+
"title": "Monarch mixer: A simple sub-quadratic GEMM-based architecture, DOI: 10.48550/arXiv.2310.12109.",
|
| 330 |
+
"author": "Fu, D. Y. et al.",
|
| 331 |
+
"venue": "2310.12109[cs].",
|
| 332 |
+
"url": null
|
| 333 |
+
}
|
| 334 |
+
},
|
| 335 |
+
{
|
| 336 |
+
"31": {
|
| 337 |
+
"title": "Language models are unsupervised multitask learners.",
|
| 338 |
+
"author": "Radford, A. et al.",
|
| 339 |
+
"venue": "\\JournalTitleOpenAI (2019).",
|
| 340 |
+
"url": null
|
| 341 |
+
}
|
| 342 |
+
},
|
| 343 |
+
{
|
| 344 |
+
"32": {
|
| 345 |
+
"title": "Language models are few-shot learners, DOI: 10.48550/arXiv.2005.14165.",
|
| 346 |
+
"author": "Brown, T. B. et al.",
|
| 347 |
+
"venue": "2005.14165[cs].",
|
| 348 |
+
"url": null
|
| 349 |
+
}
|
| 350 |
+
},
|
| 351 |
+
{
|
| 352 |
+
"33": {
|
| 353 |
+
"title": "GPT-4 technical report, DOI: 10.48550/arXiv.2303.08774.",
|
| 354 |
+
"author": "OpenAI et al.",
|
| 355 |
+
"venue": "2303.08774[cs].",
|
| 356 |
+
"url": null
|
| 357 |
+
}
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"34": {
|
| 361 |
+
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding, DOI: 10.48550/arXiv.1810.04805.",
|
| 362 |
+
"author": "Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K.",
|
| 363 |
+
"venue": "1810.04805[cs].",
|
| 364 |
+
"url": null
|
| 365 |
+
}
|
| 366 |
+
},
|
| 367 |
+
{
|
| 368 |
+
"35": {
|
| 369 |
+
"title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer, DOI: 10.48550/arXiv.1701.06538.",
|
| 370 |
+
"author": "Shazeer, N. et al.",
|
| 371 |
+
"venue": "1701.06538[cs,stat].",
|
| 372 |
+
"url": null
|
| 373 |
+
}
|
| 374 |
+
},
|
| 375 |
+
{
|
| 376 |
+
"36": {
|
| 377 |
+
"title": "Mixtral of experts.",
|
| 378 |
+
"author": "AI, M.",
|
| 379 |
+
"venue": "Section: news.",
|
| 380 |
+
"url": null
|
| 381 |
+
}
|
| 382 |
+
},
|
| 383 |
+
{
|
| 384 |
+
"37": {
|
| 385 |
+
"title": "MoEBERT: from BERT to mixture-of-experts via importance-guided adaptation, DOI: 10.48550/arXiv.2204.07675.",
|
| 386 |
+
"author": "Zuo, S. et al.",
|
| 387 |
+
"venue": "2204.07675[cs].",
|
| 388 |
+
"url": null
|
| 389 |
+
}
|
| 390 |
+
},
|
| 391 |
+
{
|
| 392 |
+
"38": {
|
| 393 |
+
"title": "Mod-squad: Designing mixture of experts as modular multi-task learners.",
|
| 394 |
+
"author": "Chen, Z. et al.",
|
| 395 |
+
"venue": "\\JournalTitlearXiv DOI: 10.48550/arXiv.2212.08066 (2022).",
|
| 396 |
+
"url": null
|
| 397 |
+
}
|
| 398 |
+
},
|
| 399 |
+
{
|
| 400 |
+
"39": {
|
| 401 |
+
"title": "MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers, DOI: 10.48550/arXiv.2002.10957.",
|
| 402 |
+
"author": "Wang, W. et al.",
|
| 403 |
+
"venue": "2002.10957[cs].",
|
| 404 |
+
"url": null
|
| 405 |
+
}
|
| 406 |
+
},
|
| 407 |
+
{
|
| 408 |
+
"40": {
|
| 409 |
+
"title": "PAQ: 65 million probably-asked questions and what you can do with them, DOI: 10.48550/arXiv.2102.07033.",
|
| 410 |
+
"author": "Lewis, P. et al.",
|
| 411 |
+
"venue": "2102.07033[cs].",
|
| 412 |
+
"url": null
|
| 413 |
+
}
|
| 414 |
+
},
|
| 415 |
+
{
|
| 416 |
+
"41": {
|
| 417 |
+
"title": "GooAQ: Open question answering with diverse answer types, DOI: 10.48550/arXiv.2104.08727.",
|
| 418 |
+
"author": "Khashabi, D. et al.",
|
| 419 |
+
"venue": "2104.08727[cs].",
|
| 420 |
+
"url": null
|
| 421 |
+
}
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"42": {
|
| 425 |
+
"title": "SearchQA: A new qanda dataset augmented with context from a search engine, DOI: 10.48550/arXiv.1704.05179.",
|
| 426 |
+
"author": "Dunn, M. et al.",
|
| 427 |
+
"venue": "1704.05179[cs].",
|
| 428 |
+
"url": null
|
| 429 |
+
}
|
| 430 |
+
},
|
| 431 |
+
{
|
| 432 |
+
"43": {
|
| 433 |
+
"title": "WikiHow: A large scale text summarization dataset, DOI: 10.48550/arXiv.1810.09305.",
|
| 434 |
+
"author": "Koupaee, M. & Wang, W. Y.",
|
| 435 |
+
"venue": "1810.09305[cs].",
|
| 436 |
+
"url": null
|
| 437 |
+
}
|
| 438 |
+
},
|
| 439 |
+
{
|
| 440 |
+
"44": {
|
| 441 |
+
"title": "A repository of conversational datasets, DOI: 10.48550/arXiv.1904.06472.",
|
| 442 |
+
"author": "Henderson, M. et al.",
|
| 443 |
+
"venue": "1904.06472[cs].",
|
| 444 |
+
"url": null
|
| 445 |
+
}
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"45": {
|
| 449 |
+
"title": "SciBERT: A pretrained language model for scientific text, DOI: 10.48550/arXiv.1903.10676.",
|
| 450 |
+
"author": "Beltagy, I., Lo, K. & Cohan, A.",
|
| 451 |
+
"venue": "1903.10676[cs].",
|
| 452 |
+
"url": null
|
| 453 |
+
}
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"46": {
|
| 457 |
+
"title": "MPNet: Masked and permuted pre-training for language understanding, DOI: 10.48550/arXiv.2004.09297.",
|
| 458 |
+
"author": "Song, K., Tan, X., Qin, T., Lu, J. & Liu, T.-Y.",
|
| 459 |
+
"venue": "2004.09297[cs].",
|
| 460 |
+
"url": null
|
| 461 |
+
}
|
| 462 |
+
},
|
| 463 |
+
{
|
| 464 |
+
"47": {
|
| 465 |
+
"title": "DeCLUTR: Deep contrastive learning for unsupervised textual representations, DOI: 10.48550/arXiv.2006.03659.",
|
| 466 |
+
"author": "Giorgi, J., Nitski, O., Wang, B. & Bader, G.",
|
| 467 |
+
"venue": "2006.03659[cs].",
|
| 468 |
+
"url": null
|
| 469 |
+
}
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"48": {
|
| 473 |
+
"title": "Domain-specific language model pretraining for biomedical natural language processing.",
|
| 474 |
+
"author": "Gu, Y. et al.",
|
| 475 |
+
"venue": "\\JournalTitleACM Trans. Comput. Healthcare 3, 1\u201323, DOI: 10.1145/3458754 (2022).",
|
| 476 |
+
"url": null
|
| 477 |
+
}
|
| 478 |
+
},
|
| 479 |
+
{
|
| 480 |
+
"49": {
|
| 481 |
+
"title": "ClinicalBERT: Modeling clinical notes and predicting hospital readmission, DOI: 10.48550/arXiv.1904.05342.",
|
| 482 |
+
"author": "Huang, K., Altosaar, J. & Ranganath, R.",
|
| 483 |
+
"venue": "1904.05342[cs].",
|
| 484 |
+
"url": null
|
| 485 |
+
}
|
| 486 |
+
},
|
| 487 |
+
{
|
| 488 |
+
"50": {
|
| 489 |
+
"title": "cdsBERT - extending protein language models with codon awareness, DOI: 10.1101/2023.09.15.558027.",
|
| 490 |
+
"author": "Hallee, L., Rafailidis, N. & Gleghorn, J. P.",
|
| 491 |
+
"venue": null,
|
| 492 |
+
"url": null
|
| 493 |
+
}
|
| 494 |
+
},
|
| 495 |
+
{
|
| 496 |
+
"51": {
|
| 497 |
+
"title": "Simple CLIP, DOI: 10.5281/zenodo.6845731 (2021).",
|
| 498 |
+
"author": "Shariatnia, M. M.",
|
| 499 |
+
"venue": null,
|
| 500 |
+
"url": null
|
| 501 |
+
}
|
| 502 |
+
},
|
| 503 |
+
{
|
| 504 |
+
"52": {
|
| 505 |
+
"title": "Efficient natural language response suggestion for smart reply.",
|
| 506 |
+
"author": "Henderson, M. et al.",
|
| 507 |
+
"venue": "\\JournalTitleArXiv abs/1705.00652 (2017).",
|
| 508 |
+
"url": null
|
| 509 |
+
}
|
| 510 |
+
},
|
| 511 |
+
{
|
| 512 |
+
"53": {
|
| 513 |
+
"title": "Super-convergence: Very fast training of neural networks using large learning rates, DOI: 10.48550/arXiv.1708.07120.",
|
| 514 |
+
"author": "Smith, L. N. & Topin, N.",
|
| 515 |
+
"venue": "1708.07120[cs,stat].",
|
| 516 |
+
"url": null
|
| 517 |
+
}
|
| 518 |
+
},
|
| 519 |
+
{
|
| 520 |
+
"54": {
|
| 521 |
+
"title": "DeepGraphLearning/torchdrug.",
|
| 522 |
+
"author": "TorchDrug.",
|
| 523 |
+
"venue": "Original-date: 2021-08-10T03:51:24Z.",
|
| 524 |
+
"url": null
|
| 525 |
+
}
|
| 526 |
+
},
|
| 527 |
+
{
|
| 528 |
+
"55": {
|
| 529 |
+
"title": "SaProt: Protein language modeling with structure-aware vocabulary, DOI: 10.1101/2023.10.01.560349.",
|
| 530 |
+
"author": "Su, J. et al.",
|
| 531 |
+
"venue": null,
|
| 532 |
+
"url": null
|
| 533 |
+
}
|
| 534 |
+
}
|
| 535 |
+
],
|
| 536 |
+
"url": "http://arxiv.org/html/2401.15713v3"
|
| 537 |
+
}
|
20241217/2402.09527v11.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2402.13532v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2402.13773v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2402.18264v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2403.10276v2.json
ADDED
|
@@ -0,0 +1,740 @@
| 1 |
+
{
|
| 2 |
+
"title": "Job loss disrupts individuals\u2019 mobility and their exploratory patterns",
|
| 3 |
+
"abstract": "In recent years, human mobility research has discovered universal patterns capable of describing how people move. These regularities have been shown to partly depend on individual and environmental characteristics (e.g., gender, rural/urban, country).\nIn this work, we show that life-course events, such as job loss, can disrupt individual mobility patterns. Adversely affecting individuals\u2019 well-being and potentially increasing the risk of social and economic inequalities, we show that job loss drives a significant change in the exploratory behaviour of individuals with changes that intensify over time since job loss.\nOur findings shed light on the dynamics of employment-related behavior at scale, providing a deeper understanding of key components in human mobility regularities. These drivers can facilitate targeted social interventions to support the most vulnerable populations.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Economic and human behavioural statistics are crucial for effective decision-making. Large-scale population surveys have been invaluable in observing economic shocks and their implications. For example, unemployment data serve as a vital indicator of an economy\u2019s health and performance [1 ###reference_b1###]: when workers become unemployed, it affects their well-being and that of their families, it diminishes their purchasing power, and impacts the overall economy.\nHowever, conventional methods to track unemployment and its implications have been challenged by survey participation rates decline [2 ###reference_b2###, 3 ###reference_b3###] especially in developing countries [4 ###reference_b4###, 5 ###reference_b5###].\nRecently, a transformative shift has emerged through the utilization of large-scale behavioural data collected from technologies like mobile phones, GPS trackers, social media platforms, and credit cards. This shift have been instrumental in advancing research in human mobility [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###], financial well-being and purchase behaviour [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###], segregation and economic inequalities [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###], crime [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###] and public health [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 23 ###reference_b23###].\nFew studies in human mobility research have managed to estimate job loss at a fine-grained level [28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###] with even fewer studies focused on the behavioural specificities of unemployed individuals [30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###]. However, the impact of job loss on individual mobility behavior at scale still remains a largely unexplored area of study.\nIn this context, the contribution of our work is twofold. Firstly, we introduce a real-time methodology for inferring unemployment status based on individual GPS trajectories. Secondly, we provide evidence of significant changes in mobility behaviour regularities following a job loss, particularly affecting vulnerable groups already at an increased risk of segregation [19 ###reference_b19###].\nWe leverage a dataset of privacy-enhanced longitudinal GPS mobility traces of nearly 1 million anonymous opted-in individuals from January 3, 2020, to September 1, 2020, across several US states.\nIn order to preserve privacy, the data provider obfuscates devices\u2019 home locations to the Census Block Group level and removes visits to sensitive points of interest from the dataset.\nThe states are selected based on their diverse workforce composition profiles, enabling us to estimate unemployment at scale and analyze multiple facets of individuals\u2019 mobility behaviour following job loss.\nTo ensure the representativeness of the GPS data and address potential sample biases [20 ###reference_b20###], we employ a reweighting technique. This process generates a resampled cohort that reflects the demographic characteristics and the employed workforce across industrial sectors in all the states under study. 
We evaluate our methodology in the context of the COVID-19 pandemic, discussing its versatility for more general systemic shocks.\nOur analysis sheds light on the impacts of job loss, providing a comprehensive, multidimensional view of individuals\u2019 mobility patterns. This includes their geographic displacement, time allocation, and set of visited locations. We also show, through a temporal-independent analysis of employed versus unemployed behavioural patterns, that there is an increasing disparity in mobility behaviour between employed and unemployed individuals since the time of job loss. In this perspective, we also illustrate how demographic factors such as sex, age, income, race, and education level can intensify the impact of job loss on individual mobility pattern contraction.\nOverall, our results provide evidence for the long-term effects of unemployment on individuals\u2019 daily lives. Job loss, as a major event in an individual\u2019s life, not only perpetrates but also exacerbates existing socio-demographic disparities in mobility behaviours. While prior literature on human mobility has identified universal characteristics in mobility patterns [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###], our findings highlight that individual life-course events, such as job loss, can affect these regularities at the individual level. Such events have the potential to influence people\u2019s habits, as well as their social and psychological well-being."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Results",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Inferring individual employment status",
|
| 21 |
+
"text": "To determine an individual\u2019s employment status, we have devised a procedure that integrates both location and survey data (see Fig. 1 ###reference_###).\nWe use a large and longitudinal dataset of privacy-enhanced GPS location data collected across seven US states from January to September 2020 to enrich census survey data provided by the US bureau. Particularly, we use the Longitudinal Employer-Household Dynamics (LEHD) Origin-Destination Employment Statistics (LODES) [37 ###reference_b37###], which provides privacy-preserved statistics about the US workforce divided by industrial sectors classified with the North American Industry Classification System (NAICS) [38 ###reference_b38###] and provides data about how many individuals are employed in a specific NAICS sector, based on their census block groups (CBGs) [39 ###reference_b39###] of residence and workplace. We refer to the Methods section for further details.\nFor each individual, we first identify stop locations, defined as sequences of GPS coordinates within a 65-meter radius where a user stayed for a minimum of 5 minutes (Fig. 1 ###reference_###A). Then, after detecting the individual\u2019s residential and workplace locations (Fig. 1 ###reference_###B), we enrich these locations with the LODES data information.\nFor those individuals with a detected work location, we label them as employed during the period in which their workplace location is identified. Each of these individuals is further assigned in probability a job, more specifically, a NAICS sector based on available survey data looking at their residential and workplace CBGs (Fig. 1 ###reference_###C).\nAs a following step, individuals are labelled as \u201cat risk of unemployment\u201d based on the reduction in visits to the workplace: if individuals never visit their workplace location, they are considered as potential candidates for unemployment (Fig. 1 ###reference_###D). The determination of employment status is sampled taking into account both the risk status and the NAICS-specific likelihood of working from home at any given time.\nTo account for whether an individual is working from home, we leverage information on the \u201cteleworkability\u201d of jobs, as presented in the study of Dingel and Neiman [40 ###reference_b40###]. For each industrial sector, the data provides the percentage of work that can be performed remotely (Fig. 1 ###reference_###E).\nBased on the individual\u2019s job sector (NAICS), population-wide change in the time spent at work, and the weight of each individual in contributing to that change, we infer the unemployment status over time, thus determining whether the individual is working from home or is unemployed (see Methods section).\n###figure_1###"
|
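To make the workplace-visit rule above concrete, here is a minimal Python sketch of how an "at risk of unemployment" label could be combined with a sector's telework share to sample an employment status. The function names, the sampling rule, and the 0.37 telework share are illustrative assumptions, not the authors' implementation.

```python
import random

def at_risk(workplace_visits_in_window: int) -> bool:
    # A user who never visits the detected workplace in the window
    # is a candidate for unemployment.
    return workplace_visits_in_window == 0

def sample_status(workplace_visits_in_window: int,
                  telework_share: float,
                  rng: random.Random) -> str:
    """Assign 'employed', 'working_from_home', or 'unemployed'.

    telework_share is the fraction of work in the user's NAICS sector
    that can be performed remotely (Dingel & Neiman-style estimate).
    """
    if not at_risk(workplace_visits_in_window):
        return "employed"
    # Among at-risk users, remote work is sampled in proportion to how
    # teleworkable the sector is (illustrative rule only).
    return "working_from_home" if rng.random() < telework_share else "unemployed"

rng = random.Random(0)
print(sample_status(0, telework_share=0.37, rng=rng))
```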
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Cohort selection and algorithm evaluation",
|
| 27 |
+
"text": "The privacy-enhanced location data provided by the location intelligence company Cuebiq intentionally excludes any direct information about users\u2019 employment to safeguard privacy. This absence of direct job-related information presents challenges in establishing ground truth for individuals\u2019 employment status. Consequently, we evaluate the accuracy of our methodology at an aggregate level by leveraging aggregated monthly statistics from Unemployment Insurance (UI) claims and Local Area Unemployment Statistics (LAUS) datasets (see SI S1 for dataset details). UI claims data offer near real-time information on the number of claimants, reported weekly, providing a timely basis for our algorithm evaluation. In contrast, LAUS data represent official unemployment figures derived through an estimation process that incorporates multiple sources, including UI claims, but are published and consolidated with a longer delay. Additionally, we incorporate state-level employment information from the Bureau of Labor Statistics (BLS) through the Quarterly Census of Employment and Wages (QCEW) program (see SI S1 for additional data details).\nAll our analyses are conducted on a cohort of mobile phone users residing in the US. We employ individual reweighting to reconstruct a representative cohort sample that accurately mirrors both (i) population-wide representativeness, based on census block group (CBG) population data, and (ii) representativeness of the employed workforce population in each state across various industrial NAICS sectors, drawing from state-level employment statistics (BLS statistics). This post-stratification procedure is crucial for addressing potential biases within the location data and ensuring the data\u2019s representativeness [20 ###reference_b20###, 41 ###reference_b41###]. It enables us to compare employment status with Unemployment Insurance claims and LAUS unemployment figures and subsequently examine mobility patterns at a population-wide scale. Further details on the post-stratification technique can be found in SI S2A.\nOur study focuses on seven U.S. states: New York, Wyoming, Indiana, Idaho, Washington, North Dakota, and New Mexico. These states were selected to capture diverse workforce compositions spanning primary, secondary, and tertiary economic sectors, as well as geographic diversity across the U.S. (see SI S2B for more details).\nTo assess the reliability of our job detection methodology, we use both LAUS and UI claims data. The UI claims enable near real-time evaluation of the algorithm at the monthly level, segmented by NAICS sectors. At the state level across the seven studied states, we observe a Pearson correlation coefficient of 0.89 between the monthly rate of individuals detected as unemployed (reweighted to match the employed population) and the monthly UI claims rate. Using the official unemployment statistics from the LAUS data, we find a Pearson correlation of 0.72 at the state level and 0.53 at the county level. These results demonstrate that our algorithm is fairly reliable and can estimate unemployment at an aggregate level (see SI S4 for further details on the algorithm evaluation)."
|
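A minimal sketch of the post-stratification reweighting idea: each individual receives a weight equal to the target share of their stratum divided by that stratum's share in the sample. The strata and target shares below are placeholders rather than the actual census or BLS figures.

```python
from collections import Counter

def poststratification_weights(sample_strata, target_shares):
    """Return one weight per sample unit so that the weighted sample
    matches the target distribution over strata."""
    n = len(sample_strata)
    sample_shares = {s: c / n for s, c in Counter(sample_strata).items()}
    return [target_shares[s] / sample_shares[s] for s in sample_strata]

# Hypothetical strata: (state, NAICS sector) pairs.
sample = [("NY", "61"), ("NY", "61"), ("NY", "23"), ("ID", "23")]
targets = {("NY", "61"): 0.25, ("NY", "23"): 0.35, ("ID", "23"): 0.40}
print(poststratification_weights(sample, targets))
```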
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Behavioural disparities between employed and unemployed individuals",
|
| 33 |
+
"text": "The availability of inferred individual employment status data over time provides a unique opportunity to gain insights into the impact of job loss on human mobility. The wealth of information at our disposal allows us to characterize and quantify changes in behaviour by comparing the daily mobility of individuals identified as employed or unemployed, offering a better understanding of shifts in mobility patterns following a job loss. In this study, we address two key questions: (i) How did individuals who experienced a job loss navigate through the pandemic period?; And, more broadly, (ii) what are the effects of job loss on an individual\u2019s mobility behaviour, and what happens when individuals face a prolonged period of unemployment?\nTo address these questions and ensure a fair comparison between a population of employed individuals and a population of unemployed individuals, we exclude all stop locations associated with an individual\u2019s workplace from our analysis. Therefore, our analysis focuses on extra-work individuals\u2019 mobility patterns, with a specific focus on those individuals who had been employed (even briefly) between January 3rd, 2020, and March 7th, 2020, namely before the WHO declaration of the COVID-19 pandemic (March 11th, 2020).\nConsidering systemic external factors in our analysis and given the significant stress placed on the labour market by the pandemic, we have the ideal conditions to study the consequences and gain a comprehensive understanding of the effects of job loss on individuals\u2019 mobility behaviour.\n###figure_2### Note that, as previously explained, our analysis focuses on individuals who were employed, even briefly, before the pandemic. Therefore, the curves representing the mobility indicators for unemployed individuals in the baseline period may not be representative."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.3.1",
|
| 37 |
+
"parent_section_id": "2.3",
|
| 38 |
+
"section_name": "disproportionate impact of the pandemic on unemployed individuals\u2019 mobility",
|
| 39 |
+
"text": "To provide a comprehensive analysis of changes in mobility, we measure within-individual variations by comparing activities to a baseline period preceding the pandemic (February 1st, 2020 - March 7th, 2020). Our focus revolves around three key well-known mobility metrics: (1) the radius of gyration () [6 ###reference_b6###], which measures the characteristic geographical displacement of individuals; (2) the time allocation entropy () [42 ###reference_b42###], which measures the distribution of time allocation in each visited location; and (3) the users\u2019 locations\u2019 capacity, denoted as , which captures the number of a user\u2019s familiar locations, alongside the number of locations added () to, and deleted () from the set of familiar locations within a specific time interval [34 ###reference_b34###] (find the formal definition of the mobility metrics in the \u201cMethods\u201d section). Collectively, these measures offer a multidimensional perspective on both the characteristic displacement and the complexity of individuals\u2019 exploratory behaviour.\nIn Fig. 2 ###reference_###, we present the results over time for each of these metrics and the relative difference over time between the group of employed and unemployed individuals. We consistently compare the mobility patterns of inferred unemployed individuals with those of employed individuals under similar pandemic-related conditions and restrictions. This approach helps isolate the specific effects of unemployment from broader external factors, such as lockdown measures. All the mobility metrics are computed for a window of 28 days with a 1-day shift. Note that, as previously explained, our analysis focuses on individuals who were employed, even briefly, before the pandemic. Therefore, the curves representing the mobility indicators for unemployed individuals in the baseline period (grey-shaded area) are not informative due to the low number of unemployed individuals during that period.\nThe results shown in Fig. 2 ###reference_### reveal a substantial impact of the pandemic on individual mobility patterns, particularly among unemployed individuals. Notably, the group of unemployed individuals exhibits lower overall activity levels across all the mobility metrics under study. Moreover, we observe that as the pandemic progresses, the mobility gaps between the employed and unemployed groups widen. While the reduction in mobility for the unemployed is limited when examining the individuals\u2019 characteristic displacement measured by the radius of gyration, with employed individuals reaching a low point of and unemployed individuals reaching a low point of , the same is not true when looking at regularity and exploration patterns. The drop in activity is particularly pronounced for unemployed individuals when examining the time allocation entropy, with low points of and for employed and unemployed individuals respectively, and capacity, with low points of and for employed and unemployed individuals respectively.\nFrom this analysis, it becomes evident that the routinary and exploratory behaviours of unemployed individuals, as measured by the time allocation entropy and by the capacity (together with its location turnover of added and deleted locations), were more affected than those of employed individuals. Moreover, over time, there is an evident increasing trend in the difference between the behaviours of the two groups. 
The difference between the two groups at the end of the period under study is for the radius of gyration, for the time allocation entropy and for the capacity.\nInterestingly, following the gradual reduction of COVID-19 restrictions, there appears to be a clear (partial) recovery for all the different facets of mobility behaviour we analyzed.\n###figure_3###"
|
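For reference, the three indicators can be computed from stop records along these lines. This is a simplified sketch using standard definitions (time-weighted radius of gyration, Shannon entropy of time allocation, and a visit-count proxy for "familiar" locations); it is not the authors' code and ignores the 28-day rolling window and the reweighting.

```python
import numpy as np

def radius_of_gyration(coords, durations):
    """Time-weighted radius of gyration of a user's stop locations
    (coordinates assumed to be in metres in a local projection)."""
    coords = np.asarray(coords, dtype=float)
    w = np.asarray(durations, dtype=float)
    center = np.average(coords, axis=0, weights=w)
    return float(np.sqrt(np.average(((coords - center) ** 2).sum(axis=1), weights=w)))

def time_allocation_entropy(durations):
    """Shannon entropy of the share of time spent at each visited location."""
    p = np.asarray(durations, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

def capacity(visits_per_location, min_visits=3):
    """Size of the set of 'familiar' locations; the visit threshold here is
    an assumption, not the paper's exact definition."""
    return int(sum(v >= min_visits for v in visits_per_location))

stops = [(0.0, 0.0), (500.0, 0.0), (0.0, 800.0)]
hours = [10.0, 2.0, 1.0]
print(radius_of_gyration(stops, hours),
      time_allocation_entropy(hours),
      capacity([6, 1, 4]))
```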
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.3.2",
|
| 43 |
+
"parent_section_id": "2.3",
|
| 44 |
+
"section_name": "Prolonged unemployment and the deterioration of mobility behaviour",
|
| 45 |
+
"text": "In the previous section, we provided insights into the collective mobility dynamics of employed and unemployed individuals, uncovering a disproportionate mobility response during the pandemic period between the two groups. To extend the validity of our findings beyond the pandemic conditions and ensure their generalizability to other possible systemic shocks, we further investigate into the growing divergence over time between the mobility behaviours of employed and unemployed individuals. Through the following analysis, we aim to understand the effects of job loss on individual-level mobility behaviour, assessing whether a prolonged period of unemployment leaves a lasting impact.\nHence, we present a robust and general framework for detecting and tracking unemployment potentially adaptable to different systemic shocks. In particular, we propose a time-independent analysis of employed/unemployed behavioural patterns which tries to understand whether the duration of unemployment contributes to the growing disparity between the two groups\u2019 mobility behaviour.\nDue to the period during which the data was collected, we first need to consider the non-negligible impact of Non-Pharmaceutical Interventions, and more in general of the pandemic, on the general population mobility during 2020. To mitigate the effect of the COVID-19 pandemic on the results, we standardize each individual\u2019s mobility indicator by calculating the z-score using the average and standard deviation of the employed group\u2019s indicators on a specific day . Then, to better understand the effects of a job loss on an individual, we align the mobility indicators of all individuals by shifting time so that represents the time when an individual lost their job. This approach enables consistent comparisons of individuals\u2019 mobility behaviour at different times with respect to the date at which they lost their jobs.\nAs illustrated in Fig. 3 ###reference_###, both the radius of gyration (at a smaller level) and time allocation entropy (at a larger level) were affected and gradually decreased over time, reaching almost and standard deviations, respectively, compared to when individuals were employed. Although the radius of gyration seemed to be less affected, the relative time allocation entropy of individuals who lost their jobs decreased sharply and constantly the longer they were unemployed. This large reduction in time allocation entropy may be related to the tendency of unemployed individuals to spend a significant fraction of time at home [31 ###reference_b31###, 43 ###reference_b43###].\nA similar dynamic is observed in the capacity of individuals, which displays a sharp decrease of more than standard deviations, followed by a slow recovery after approximately 60 days. The added locations () to the set of familiar places exhibited similar behaviour as the capacity, with a noticeable decrease after an individual loses their job. On the other hand, the deletion of familiar locations () increases abruptly when individuals lose their jobs, followed by a sharp decrease. Both the added and deleted locations then remain significantly low, indicating an overall lower turnover in the set of an individual\u2019s familiar locations. 
Despite a modest recovery after approximately two months, the results highlight the clear and persistent impact of unemployment on limiting individuals\u2019 abilities to explore new opportunities in physical space.\nThe drop in capacity () together with the decrease in the number of added () and deleted () locations highlight a reduced location turnover and a sustained contraction into the individuals\u2019 set of familiar locations. For an individual, this scenario may indicate a potential decrease in exposure to opportunities and an increased risk of isolation after experiencing a job loss."
|
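The standardisation and time-alignment step can be sketched as follows: each indicator is z-scored against the employed group's mean and standard deviation for the same calendar day, and then re-indexed to days since job loss. Column names and the toy data are assumptions.

```python
import pandas as pd

def align_to_job_loss(df, job_loss_date):
    """df: one row per (user, date) with columns 'user', 'date', 'indicator'
    and a boolean 'employed' flag; job_loss_date maps user -> Timestamp."""
    # Day-specific mean and std of the indicator among employed users.
    stats = (df[df["employed"]]
             .groupby("date")["indicator"]
             .agg(["mean", "std"]))
    out = df.join(stats, on="date")
    out["z"] = (out["indicator"] - out["mean"]) / out["std"]
    # Shift time so that day 0 is the day the user lost their job.
    loss = pd.to_datetime(out["user"].map(job_loss_date))
    out["days_since_loss"] = (out["date"] - loss).dt.days
    return out

df = pd.DataFrame({
    "user": ["u1", "u2", "u3"] * 2,
    "date": pd.to_datetime(["2020-05-01"] * 3 + ["2020-05-02"] * 3),
    "indicator": [2.1, 1.8, 0.9, 2.0, 1.9, 0.7],
    "employed": [True, True, False] * 2,
})
print(align_to_job_loss(df, {"u3": pd.Timestamp("2020-04-20")})
      [["user", "date", "z", "days_since_loss"]])
```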
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.4",
|
| 49 |
+
"parent_section_id": "2",
|
| 50 |
+
"section_name": "Socio-demographic factors in job loss behavioural changes",
|
| 51 |
+
"text": "To get a better understanding of the implications of prolonged unemployment, we evaluate and quantify socio-demographic differences in mobility patterns among individuals enduring a prolonged period of unemployment.\nLeveraging socio-demographic information from the Longitudinal Employer-Household Dynamics (LODES) dataset [37 ###reference_b37###], including Sex, Age, Income, Race, and Education, we analyze demographic differences in mobility behaviours.\nBuilding on the results presented in Fig. 3 ###reference_###, we disaggregate the mobility behaviour of unemployed individuals () based on the individual\u2019s socio-demographic group (see SI S5 for more details), revealing significant disparities in the mobility behaviour of unemployed individuals when compared with the mobility behaviour of employed individuals (see SI Tab. S3).\nIn Fig. 4 ###reference_###, we compare each mobility indicator of individuals who fall in a particular socio-demographic category against the population of employed individuals. The results show significant differences between male and female individuals across all three mobility indicators, namely radius of gyration (), time allocation entropy (), and capacity (). Unemployed women generally exhibit lower values of mobility exploration (, and ) and diversity ( and ).\nRegarding individuals\u2019Age, differences in mobility behaviour are relatively smaller, with older individuals () showing a more pronounced reduction in their characteristic geographical displacement () compared to other groups.\nIncome disparities reveal smaller differences in the radius of gyration (), whereas richer individuals () exhibit lower values in their time allocation entropy (). Capacity (), in contrast, is lower for those individuals reporting lower income values ().\nIn terms of Race, Asians display lower values in all three mobility metrics, followed by Black or African American individuals and then White individuals. Specific ethnic groups (e.g., American Indian or Alaska Native, Native Hawaiian or Other Pacific Islander and Two or More Race Groups) have been excluded from the analysis due to small sample sizes.\nEducational levels show fewer differences in mobility behaviour between groups, with no significant differences in radius of gyration (). Lower values of time allocation entropy () and capacity () are observed in individuals with Bachelor degree or advanced degrees.\nWe test the significance of the differences between demographic groups using Welch\u2019s t-test [44 ###reference_b44###] (see SI Tab. S4), considering the behavioural information from days after job-loss up to days (to remove the initial transitioning phase).\nTo validate our understanding of job loss as an important life-course event that can shape individual mobility patterns, we conducted a comparative analysis between unemployed individuals and their employed counterparts from the same socio-demographic group. This comparison demonstrates that the reduced mobility behaviour occurring after a job loss is consistently present, although with different intensity, in all the studied population strata (see SI Tab. 
S5 for the statistics).\nTaken together, these results substantiate the interpretation of socio-demographic characteristics as a factor to be taken into account when aiming to mitigate the effects of unemployment on individuals in mobility patterns.\nThe differences observed in mobility indicators after a job loss are attributable to both the transition to joblessness and the inherent mobility tendencies within socio-demographic groups [45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###, 48 ###reference_b48###].\n###figure_4###"
|
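The group comparisons can be run with Welch's t-test (unequal variances), for instance via SciPy. The synthetic values and group labels below are placeholders, and the paper's exact post-job-loss analysis window is not reproduced here.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Hypothetical z-scored capacity values for two demographic groups,
# restricted to some analysis window after job loss (placeholder data).
group_a = rng.normal(-1.2, 0.4, size=200)
group_b = rng.normal(-0.9, 0.5, size=180)

t_stat, p_value = ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```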
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Discussion",
|
| 57 |
+
"text": "The availability of massive digital traces collected through mobile phones has become an important proxy for studying individual behaviour at population scales. The size and granularity of these datasets have revealed crucial insights into the regularities of human mobility and have exposed universal properties of human mobility patterns [42 ###reference_b42###, 49 ###reference_b49###, 50 ###reference_b50###, 36 ###reference_b36###, 34 ###reference_b34###, 9 ###reference_b9###, 35 ###reference_b35###]. Interestingly, within the numerous mobility models for human mobility, the notion of \u201copportunities\u201d consistently emerges as a key driver of individual movement patterns [51 ###reference_b51###, 52 ###reference_b52###, 6 ###reference_b6###, 49 ###reference_b49###, 50 ###reference_b50###, 8 ###reference_b8###, 53 ###reference_b53###]. This notion suggests that individuals navigate physical space in pursuit of various kinds of opportunities spanning social, educational, and economic domains.\nIn this perspective, understanding whether individuals transitioning to a state of unemployment can still access and benefit from the opportunities that their social and physical environments offer is of great social importance [54 ###reference_b54###, 55 ###reference_b55###, 56 ###reference_b56###].\nTo proxy social exposure and access to opportunities, we leverage an individual-level longitudinal dataset of fine-grained mobility behaviour alongside secondary demographic data [40 ###reference_b40###, 57 ###reference_b57###, 39 ###reference_b39###, 38 ###reference_b38###].\nWe employ reweighting and rescaling techniques to address potential sample biases in the GPS data and to mitigate the effects of COVID-19 restrictions on mobility behaviour analysis.\nThe empirical evidence we present highlights that individuals facing unemployment significantly decrease their mobility, suggesting a reduction in their ability to explore and exploit available opportunities.\nThis effect worsens over time, leading to a differentiation of the population into employed and unemployed subgroups with persistent behavioural differences.\nIn particular, the impact of job loss manifests differently across various socio-demographic groups, highlighting how some of these already vulnerable communities may be disproportionately affected [46 ###reference_b46###, 47 ###reference_b47###, 45 ###reference_b45###, 56 ###reference_b56###, 48 ###reference_b48###].\nIn this context, our work underscores the significant influence of personal circumstances or life events, such as job loss, on established patterns of human mobility.\nThese life-course events can drive individuals to transition through different states of human mobility regularities, adding a layer of complexity to the notion that mobility patterns can depend on the structure of the surrounding physical space [32 ###reference_b32###, 58 ###reference_b58###, 9 ###reference_b9###] and the demographic attributes of individuals [36 ###reference_b36###, 32 ###reference_b32###, 9 ###reference_b9###].\nFurthermore, for an individual, the reduction in exploration patterns is not only a reflection of the immediate impact of job loss but potentially also signals a broader issue, leading to a decreased exposure to opportunities and an increased risk of social isolation after experiencing job loss [59 ###reference_b59###, 60 ###reference_b60###, 61 ###reference_b61###, 62 ###reference_b62###], reinforcing negative effects on an individual\u2019s well-being [63 
###reference_b63###, 64 ###reference_b64###, 65 ###reference_b65###, 66 ###reference_b66###].\nImportantly, the progressive reduction in individual mobility and the associated decline in social participation [59 ###reference_b59###, 60 ###reference_b60###, 61 ###reference_b61###, 62 ###reference_b62###], could also undermine the potential effectiveness of intervention programs targeting the early stages of unemployment.\nBy leveraging real-time data, our approach can facilitate targeted efforts during these initial phases, enabling more effective mitigation of the enduring and group-specific impacts of job loss [56 ###reference_b56###, 55 ###reference_b55###]."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Methods",
|
| 63 |
+
"text": ""
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.1",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Stop locations",
|
| 69 |
+
"text": "The GPS location data was provided by Cuebiq, a location intelligence company that provided through their Cuebiq Data for Good COVID-19 Collaborative program, a dataset of privacy-enhanced GPS locations from users who opted-in to share the data anonymously for research purposes through a CCPA (California Consumer Privacy Act) compliant framework (see SI S1 for more details). To further preserve privacy, the data provider obfuscates users\u2019 precise home and work locations by transforming them to the centroid of the corresponding Census Block Group.\nWe analyze a dataset that spans a period of 9 months, from January 2020 to September 2020 for seven US states including New York, Wyoming, Indiana, Idaho, Washington, North Dakota, and New Mexico.\nWe filter out all users with less than one month of data before declaring a national emergency (March 13, 2020) and less than four months after it. We also require users to have 5 hours per day covered by at least one GPS location. The resulting dataset includes more than 1 million anonymous, opted-in individuals.\nFor all users, we extract their stop events with an algorithm based on Hariharan and Toyama [67 ###reference_b67###]. We define a stop event as a temporal sequence of GPS coordinates in a radius meters where a user stayed for at least minutes.\nFor each user, we then define their stop locations as the stop events that can be considered as part of the same place using the DBSCAN algorithm [68 ###reference_b68###]. With DBSCAN, we group points within a distance of meters to form a cluster with at least stop event. For a more detailed explanation of the GPS data processing please refer to Lucchini et al. [62 ###reference_b62###]."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Residential and Workplace detection",
|
| 75 |
+
"text": "We determined the most likely residential and workplace areas for each user by calculating these areas multiple times over a moving rolling window of 28 days.\nWe then aggregate for each day , for each user , and stop , the amount of time spent in a window distinguishing between:\nResidential time: The amount of time between 8 pm and 4 am spent by the user at stop . Unlike previous studies such as TimeGeo [69 ###reference_b69###], we did not assume the entire weekend as residential time since the US Bureau of Labor Statistics recently estimated that around 34% of employed people work in the weekend [70 ###reference_b70###].\nWorkplace time: On weekdays, the amount of time between 9 am and 5 pm spent by the user at stop . We chose these working hours because they represent the most common working time in the US [71 ###reference_b71###]. Additionally, we assumed that a potential workplace stay should last at least 30 minutes and occur five times a week. These assumptions were similar to those made in previous studies [69 ###reference_b69###].\nWe detect for each user their residential location as the stop location with the largest Residential time during the period that goes from January 3rd to March 7th (before the pandemic).\nTo detect changes in the workplace location for a particular user , we label a stop as workplace location if this stop is not a residential location (to avoid tracking people already working from home) and it has the largest Workplace time in the observed 28 days window.\nTo protect users\u2019 privacy, the residential and workplace locations were blurred and associated with the corresponding Census Block Groups."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.3",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "assignment",
|
| 81 |
+
"text": "The location data does not have direct information about the users\u2019 jobs. To the extracted residential and workplace locations, we associate a Geographic Identifier (GEOID), which is a numeric code that uniquely identifies an administrative geographic area in the US Census data.\nTo be able to assign a job to each user, we match the residential and workplace GEOIDs to the GEOIDs of the Longitudinal Employer-Household Dynamics Origin-Destination Employment Statistics (LODES) datasets [37 ###reference_b37###]. Given the residential/workplace locations, these datasets provide statistics about the number of jobs in each sector as defined by the North American Industry Classification System (NAICS) [38 ###reference_b38###].\nThe information of the LODES datasets is organized into i) Residence Area Characteristics (RAC); ii) Workplace Area Characteristics (WAC); iii) Origin-Destination (OD).\nRAC and WAC datasets provide job statistics according to the residential and workplace census block groups respectively. The OD dataset provides job statistics considering both residential and workplace census blocks (for further details on the LODES datasets refer to SI S1).\nFrom these three datasets, we then compute the probabilities of working in a particular NAICS sector by normalizing the number of jobs in each NAICS sector by the total number of jobs.\nFinally, for each individual and the combination of their residential and workplace GEOIDs, we assign an industrial sector in probability. The probability of working in a specific sector, , is computed for each user with a home location and a work location, as the joint probability of independently working in that specific sector, given that resides in and works in (their home and work locations respectively):\nwhere is the residential GEOID and is the workplace GEOID as provided in the LODES data.\nBootstrapping is applied for a more robust NAICS assignment to individuals. Specifically, for each individual, we sample times from the corresponding NAICS probability distribution and retain all the sampled NAICS as independent realisations of a GEOID-representative population."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.4",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "Employment status inference",
|
| 87 |
+
"text": "To infer the employment status of a bootstrapped user, we leverage information on the reduction in i) workplace visits and ii) time spent at work."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.4.1",
|
| 91 |
+
"parent_section_id": "4.4",
|
| 92 |
+
"section_name": "is at risk of unemployment?",
|
| 93 |
+
"text": "We define an individual to be at risk of losing their job if they never visited their work location.\nSince we are interested in studying the impact of job loss at the individual level, we restrict our analysis to individuals who were employed before the pandemic declaration. Using the pre-pandemic period as a baseline period makes it possible to investigate the shock induced by the pandemic on the job market. Specifically, we retain only users who have been working at least 5 days during the baseline period (), namely the period before the pandemic (January 3rd - March 7th, 2020).\nWe then compute the reduction of workplace visits and identify as \u201cat risk\u201d those who at a specific time window didn\u2019t visit their workplace.\nTo identify the population at risk of unemployment over time, we used a time window of days with a daily shift. Thus, at each time , we define an individual to be at risk of unemployment, , if they didn\u2019t visit their work location in the entire time window:\nwith representing the number of visits a user made to their work location within the time window ."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.4.2",
|
| 97 |
+
"parent_section_id": "4.4",
|
| 98 |
+
"section_name": "is working remotely?",
|
| 99 |
+
"text": "Under the assumption that individuals at risk of unemployment () could be working remotely, we assign to each individual the likelihood of working remotely based both on their personal and other workers\u2019 (in the same NAICS) working behaviour changes. Individual working behaviour change is measured in terms of the reduction in the time spent at work with respect to the baseline period:\nHere is the time the user spent at the workplace in the time window , and is the median of the time spent at work (within windows of the same size as ) during the baseline period (January 3rd - March 7th, 2020).\nBy additionally adjusting for how much the entire sector is working remotely during a specific window compared the estimated maximum amount of time that can be worked remotely [40 ###reference_b40###], we can write the probability of being unemployed as:\nwhere represents the weighted fraction of remaining work time that could be performed remotely by those individuals who stopped visiting their work location, and represent the fraction of work that could be performed remotely adjusted by potential changes in remote working behaviour among those individuals who didn\u2019t interrupt visiting their workplace (for additional details see SI S3A).\nIntuitively, by estimating at the sector level the reduction in the time spent at work by users who are still visiting their work location (), we measure how much \u201cremote work\u201d is already performed by individuals who are still visiting their workplace. The remaining part of the remote-workable time (if any), , is used to uniformly distribute the probability of being unemployed among individuals who stopped visiting their workplace (for further details see SI S3A)."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.5",
|
| 103 |
+
"parent_section_id": "4",
|
| 104 |
+
"section_name": "Mobility metrics",
|
| 105 |
+
"text": "To track the changes in mobility of employed and unemployed individuals, we measure within-individual variations by comparing mobility behaviours to a baseline period preceding the pandemic (February 1st, 2020 - March 7th, 2020). The mobility metrics we used in our analysis offer a comprehensive picture of mobility behaviour including individuals\u2019 characteristic displacement and the complexity of individuals\u2019 exploratory behaviour. To track the changes over time, we computed these metrics over a moving window of 28 days with a 1-day shift."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.5.1",
|
| 109 |
+
"parent_section_id": "4.5",
|
| 110 |
+
"section_name": "Time Allocation Entropy",
|
| 111 |
+
"text": "We introduce the time allocation entropy, which measures the distribution of time allocation in each visited location by an individual, as a measure of exploratory behaviour:\n\nHere, is the total number of unique visited locations of an individual , and is the total time individual spends in location (weighted by time spent)."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.5.2",
|
| 115 |
+
"parent_section_id": "4.5",
|
| 116 |
+
"section_name": "Radius of gyration",
|
| 117 |
+
"text": "To measure the characteristic geographical displacement of individuals, we use the well-known radius of gyration [6 ###reference_b6###, 7 ###reference_b7###] defined as:\n,\nwhere is the total time spent by a particular individual to all their visited locations; is the time spent to location ; is the set of stop locations within a time window; is a two-dimensional vector representing the location\u2019s GPS position recorded as latitude and longitude; and is the centre of mass of the trajectories, defined as ."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.5.3",
|
| 121 |
+
"parent_section_id": "4.5",
|
| 122 |
+
"section_name": "Capacity",
|
| 123 |
+
"text": "We capture and track the number of an individual\u2019s familiar locations following the definition of Alessandretti et al. [34 ###reference_b34###]. For each individual, we compute the location capacity in each time window, normalized by the mean capacity of all users during the baseline period before the pandemic (January 3rd - March 7th, 2020).\nTogether with the capacity , we also computed the number of locations added to () and deleted from () the set of familiar locations within a specific time interval and the previous time interval [34 ###reference_b34###]."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "5",
|
| 127 |
+
"parent_section_id": null,
|
| 128 |
+
"section_name": "Data and code availability",
|
| 129 |
+
"text": "The data supporting the findings of this study are accessible through Cuebiq\u2019s Data for Good initiative. For details on how to request access, including conditions and limitations, please visit: https://www.cuebiq.com/about/data-for-good/ ###reference_/###.\nReplication code is available on GitHub at https://github.com/scentellegher/ImpactJobLoss/ ###reference_Loss/###."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "6",
|
| 133 |
+
"parent_section_id": null,
|
| 134 |
+
"section_name": "Acknowledgements",
|
| 135 |
+
"text": "The authors would like to thank Cuebiq that kindly provided us with the mobility dataset for this research through their Data for Good program.\nL.L. thanks G.K. for the insightful discussions and his support during the entire project development.\nL.L. has been supported by the ERC project \u201cIMMUNE\u201d (Grant agreement ID: 101003183). L.L. acknowledges the support from the \u201cFondazione Romeo ed Enrica Invernizzi\u201d for the research activities of the \u2019Covid Crisis Lab\u2019 at Bocconi University.\nS.C. and B.L. have been supported by the PNRR ICSC National Research Centre for High Performance Computing, Big Data and Quantum Computing (CN00000013), under the NRRP MUR program funded by the NextGenerationEU."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "7",
|
| 139 |
+
"parent_section_id": null,
|
| 140 |
+
"section_name": "Author contributions statement",
|
| 141 |
+
"text": "L.L., S.C., M.D.N. conceived the original idea and planned the experiments. S.C., L.L. and M.D.N. pre-processed the mobility data. S.C., L.L. and M.T. carried out the experiments and made the Figures. L.L., S.C. and M.D.N. contributed to the interpretation of the results. L.L. and S.C. wrote the manuscript. S.C., M.D.N., M.T., B.L., and L.L. provided critical feedback, helped shape the manuscript and substantively revised it."
|
| 142 |
+
}
|
| 143 |
+
],
|
| 144 |
+
"appendix": [],
|
| 145 |
+
"tables": {},
|
| 146 |
+
"image_paths": {
|
| 147 |
+
"1": {
|
| 148 |
+
"figure_path": "2403.10276v2_figure_1.png",
|
| 149 |
+
"caption": "Figure 1: Employment status detection algorithm. Overview of the developed procedure to detect unemployment. (A) Stop locations detection (individual stopped in a 65-meter radius and stayed for at least 5 minutes); (B) Workplace and Residential Census Block Groups (CBGs) detection; (C) Job Assignment, a NAICS sector is assigned in probability given the individual\u2019s Workplace and Residential CBGs; (D) the Risk of Unemployment is computed for each individual based on their workplace visits; (E) Remote working correction based on the teleworkability of the individual job (NAICS sector); (F) For each individual we have their full employment status over time. Icons: Fontawesome, Flaticon, Maps: Stamen Maps.",
|
| 150 |
+
"url": "http://arxiv.org/html/2403.10276v2/x1.png"
|
| 151 |
+
},
|
| 152 |
+
"2": {
|
| 153 |
+
"figure_path": "2403.10276v2_figure_2.png",
|
| 154 |
+
"caption": "Figure 2: Impact of the pandemic on employed and unemployed mobility. Percentage changes with respect to the baseline period (February 1st - March 7th) in extra-work individuals\u2019 mobility patterns for employed and unemployed groups, and their difference over time, as measured by different mobility metrics. (A) Radius of gyration (rgsubscript\ud835\udc5f\ud835\udc54r_{g}italic_r start_POSTSUBSCRIPT italic_g end_POSTSUBSCRIPT) and the corresponding difference between the groups of employed and unemployed over time; (B) Time allocation entropy (H\ud835\udc3bHitalic_H), which measures the distribution of time allocation in each visited location; (C) Capacity C\ud835\udc36Citalic_C which represents the number of a user\u2019s familiar locations; and (D) the number of added A\ud835\udc34Aitalic_A and deleted D\ud835\udc37Ditalic_D locations between consecutive windows.\nEach mobility metric is computed over a window of 28 days with a 1-day shift. The grey vertical line represents the WHO COVID-19 pandemic declaration (March 11th, 2020).\nMobility patterns are reported only for individuals visiting their work location at least once within the grey shaded area period.",
|
| 155 |
+
"url": "http://arxiv.org/html/2403.10276v2/x2.png"
|
| 156 |
+
},
|
| 157 |
+
"3": {
|
| 158 |
+
"figure_path": "2403.10276v2_figure_3.png",
|
| 159 |
+
"caption": "Figure 3: Individual-level mobility behaviour after job loss using employed population as reference group. The results show the lasting impact of prolonged periods of unemployment on individual-level mobility behaviour.\nEach individual\u2019s mobility indicator, which includes (A) the radius of gyration, (B) the time allocation entropy, (C) the capacity C\ud835\udc36Citalic_C, and (D) the added A\ud835\udc34Aitalic_A and deleted D\ud835\udc37Ditalic_D locations over time, is standardized by calculating the z-score using the average and standard deviation of the employed group\u2019s indicators on a specific day t\ud835\udc61titalic_t. Then, time is aligned such that at time t=0\ud835\udc610t=0italic_t = 0, individuals have lost their jobs. Shaded areas represent the 2-standard-deviation range.",
|
| 160 |
+
"url": "http://arxiv.org/html/2403.10276v2/x3.png"
|
| 161 |
+
},
|
| 162 |
+
"4": {
|
| 163 |
+
"figure_path": "2403.10276v2_figure_4.png",
|
| 164 |
+
"caption": "Figure 4: Demographic variations in mobility among individuals enduring a prolonged period of unemployment. Differences in mobility behaviour of unemployed individuals in the latter stages of unemployment (between 30 and 100 days) for Sex, Age, Income, Race and Education demographics compared to the mobility indicators of the reference group of employed individuals. For each metric and demographic group, we provide the mean and standard error of each group in the 30-100 days period after the job loss.",
|
| 165 |
+
"url": "http://arxiv.org/html/2403.10276v2/x4.png"
|
| 166 |
+
}
|
| 167 |
+
},
|
| 168 |
+
"validation": true,
|
| 169 |
+
"references": [
|
| 170 |
+
{
|
| 171 |
+
"1": {
|
| 172 |
+
"title": "How the Government Measures Unemployment.",
|
| 173 |
+
"author": "U.S. Bureau of Labor Statistics.",
|
| 174 |
+
"venue": "https://www.bls.gov/cps/cps_htgm.htm, 2023.",
|
| 175 |
+
"url": null
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"2": {
|
| 180 |
+
"title": "The evolution of rotation group bias: Will the real unemployment rate\nplease stand up?",
|
| 181 |
+
"author": "Alan B Krueger, Alexandre Mas, and Xiaotong Niu.",
|
| 182 |
+
"venue": "Review of Economics and Statistics, 99(2):258\u2013264, 2017.",
|
| 183 |
+
"url": null
|
| 184 |
+
}
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"3": {
|
| 188 |
+
"title": "Labour Force Survey performance and quality monitoring report: April\nto June 2023.",
|
| 189 |
+
"author": "Office for National Statistics (ONS).",
|
| 190 |
+
"venue": "https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/methodologies/labourforcesurveyperformanceandqualitymonitoringreport,\n2023.",
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"4": {
|
| 196 |
+
"title": "World employment and social outlook: Trends 2015.",
|
| 197 |
+
"author": "International Labour Office.",
|
| 198 |
+
"venue": "International Labour Organization Geneva, 2015.",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"5": {
|
| 204 |
+
"title": "World employment and social outlook: trends 2022, 2022.",
|
| 205 |
+
"author": "Sabina Dewan, Ekkehard Ernst, and Souleima Achkar Hilal.",
|
| 206 |
+
"venue": null,
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"6": {
|
| 212 |
+
"title": "Understanding individual human mobility patterns.",
|
| 213 |
+
"author": "Marta C Gonzalez, Cesar A Hidalgo, and Albert-Laszlo Barabasi.",
|
| 214 |
+
"venue": "nature, 453(7196):779\u2013782, 2008.",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"7": {
|
| 220 |
+
"title": "Returners and explorers dichotomy in human mobility.",
|
| 221 |
+
"author": "Luca Pappalardo, Filippo Simini, Salvatore Rinzivillo, Dino Pedreschi, Fosca\nGiannotti, and Albert-L\u00e1szl\u00f3 Barab\u00e1si.",
|
| 222 |
+
"venue": "Nature communications, 6(1):8166, 2015.",
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"8": {
|
| 228 |
+
"title": "Following the footsteps of giants: modeling the mobility of\nhistorically notable individuals using wikipedia.",
|
| 229 |
+
"author": "Lorenzo Lucchini, Sara Tonelli, and Bruno Lepri.",
|
| 230 |
+
"venue": "EPJ Data Science, 8(1):36, 2019.",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"9": {
|
| 236 |
+
"title": "The scales of human mobility.",
|
| 237 |
+
"author": "Laura Alessandretti, Ulf Aslak, and Sune Lehmann.",
|
| 238 |
+
"venue": "Nature, 587(7834):402\u2013407, 2020.",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"10": {
|
| 244 |
+
"title": "Money walks: implicit mobility behavior and financial well-being.",
|
| 245 |
+
"author": "Vivek Kumar Singh, Burcin Bozkaya, and Alex Pentland.",
|
| 246 |
+
"venue": "PloS one, 10(8):e0136628, 2015.",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"11": {
|
| 252 |
+
"title": "Inferring psychological traits from spending categories and dynamic\nconsumption patterns.",
|
| 253 |
+
"author": "Natkamon Tovanich, Simone Centellegher, Nac\u00e9ra Bennacer Seghouani, Joe\nGladstone, Sandra Matz, and Bruno Lepri.",
|
| 254 |
+
"venue": "EPJ Data Science, 10(1):24, 2021.",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"12": {
|
| 260 |
+
"title": "Social bridges in urban purchase behavior.",
|
| 261 |
+
"author": "Xiaowen Dong, Yoshihiko Suhara, Bur\u00e7in Bozkaya, Vivek K Singh, Bruno\nLepri, and Alex \u2018Sandy\u2019 Pentland.",
|
| 262 |
+
"venue": "ACM Transactions on Intelligent Systems and Technology (TIST),\n9(3):1\u201329, 2017.",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"13": {
|
| 268 |
+
"title": "Money buys happiness when spending fits our personality.",
|
| 269 |
+
"author": "Sandra C Matz, Joe J Gladstone, and David Stillwell.",
|
| 270 |
+
"venue": "Psychological science, 27(5):715\u2013725, 2016.",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"14": {
|
| 276 |
+
"title": "From reddit to wall street: The role of committed minorities in\nfinancial collective action.",
|
| 277 |
+
"author": "Lorenzo Lucchini, Luca Maria Aiello, Laura Alessandretti, Gianmarco\nDe Francisci Morales, Michele Starnini, and Andrea Baronchelli.",
|
| 278 |
+
"venue": "Royal Society Open Science, 9(4):211488, 2022.",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"15": {
|
| 284 |
+
"title": "Cities through the prism of people\u2019s spending behavior.",
|
| 285 |
+
"author": "Stanislav Sobolevsky, Izabela Sitko, Remi Tachet des Combes, Bartosz Hawelka,\nJuan Murillo Arias, and Carlo Ratti.",
|
| 286 |
+
"venue": "PloS one, 11(2):e0146291, 2016.",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"16": {
|
| 292 |
+
"title": "Urban mobility and neighborhood isolation in america\u2019s 50 largest\ncities.",
|
| 293 |
+
"author": "Qi Wang, Nolan Edward Phillips, Mario L Small, and Robert J Sampson.",
|
| 294 |
+
"venue": "Proceedings of the National Academy of Sciences,\n115(30):7735\u20137740, 2018.",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"17": {
|
| 300 |
+
"title": "Social capital i: measurement and associations with economic\nmobility.",
|
| 301 |
+
"author": "Raj Chetty, Matthew O Jackson, Theresa Kuchler, Johannes Stroebel, Nathaniel\nHendren, Robert B Fluegge, Sara Gong, Federico Gonzalez, Armelle Grondin,\nMatthew Jacob, et al.",
|
| 302 |
+
"venue": "Nature, 608(7921):108\u2013121, 2022.",
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"18": {
|
| 308 |
+
"title": "Social capital ii: determinants of economic connectedness.",
|
| 309 |
+
"author": "Raj Chetty, Matthew O Jackson, Theresa Kuchler, Johannes Stroebel, Nathaniel\nHendren, Robert B Fluegge, Sara Gong, Federico Gonzalez, Armelle Grondin,\nMatthew Jacob, et al.",
|
| 310 |
+
"venue": "Nature, 608(7921):122\u2013134, 2022.",
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"19": {
|
| 316 |
+
"title": "Mobility patterns are associated with experienced income segregation\nin large us cities.",
|
| 317 |
+
"author": "Esteban Moro, Dan Calacci, Xiaowen Dong, and Alex Pentland.",
|
| 318 |
+
"venue": "Nature communications, 12(1):4633, 2021.",
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"20": {
|
| 324 |
+
"title": "Behavioral changes during the covid-19 pandemic decreased income\ndiversity of urban encounters.",
|
| 325 |
+
"author": "Takahiro Yabe, Bernardo Garc\u00eda Bulle Bueno, Xiaowen Dong, Alex Pentland,\nand Esteban Moro.",
|
| 326 |
+
"venue": "Nature Communications, 14(1):2310, 2023.",
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"21": {
|
| 332 |
+
"title": "Crime rate inference with big data.",
|
| 333 |
+
"author": "Hongjian Wang, Daniel Kifer, Corina Graif, and Zhenhui Li.",
|
| 334 |
+
"venue": "In Proceedings of the 22nd ACM SIGKDD international conference\non knowledge discovery and data mining, pages 635\u2013644, 2016.",
|
| 335 |
+
"url": null
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"22": {
|
| 340 |
+
"title": "Crime feeds on legal activities: Daily mobility flows help to explain\nthieves\u2019 target location choices.",
|
| 341 |
+
"author": "Guangwen Song, Wim Bernasco, Lin Liu, Luzi Xiao, Suhong Zhou, and Weiwei Liao.",
|
| 342 |
+
"venue": "Journal of Quantitative Criminology, 35:831\u2013854, 2019.",
|
| 343 |
+
"url": null
|
| 344 |
+
}
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"23": {
|
| 348 |
+
"title": "Crime, inequality and public health: A survey of emerging trends in\nurban data science.",
|
| 349 |
+
"author": "Massimiliano Luca, Gian Maria Campedelli, Simone Centellegher, Michele Tizzoni,\nand Bruno Lepri.",
|
| 350 |
+
"venue": "Frontiers in Big Data, 6:50, 2023.",
|
| 351 |
+
"url": null
|
| 352 |
+
}
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"24": {
|
| 356 |
+
"title": "Quantifying the impact of human mobility on malaria.",
|
| 357 |
+
"author": "Amy Wesolowski, Nathan Eagle, Andrew J Tatem, David L Smith, Abdisalan M Noor,\nRobert W Snow, and Caroline O Buckee.",
|
| 358 |
+
"venue": "Science, 338(6104):267\u2013270, 2012.",
|
| 359 |
+
"url": null
|
| 360 |
+
}
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"25": {
|
| 364 |
+
"title": "Mobile phone data for informing public health actions across the\ncovid-19 pandemic life cycle.",
|
| 365 |
+
"author": "Nuria Oliver, Bruno Lepri, Harald Sterly, Renaud Lambiotte, S\u00e9bastien\nDeletaille, Marco De Nadai, Emmanuel Letouz\u00e9, Albert Ali Salah, Richard\nBenjamins, Ciro Cattuto, et al.",
|
| 366 |
+
"venue": "Science advances, 6(23):eabc0764, 2020.",
|
| 367 |
+
"url": null
|
| 368 |
+
}
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"26": {
|
| 372 |
+
"title": "The effect of human mobility and control measures on the covid-19\nepidemic in china.",
|
| 373 |
+
"author": "Moritz UG Kraemer, Chia-Hung Yang, Bernardo Gutierrez, Chieh-Hsi Wu, Brennan\nKlein, David M Pigott, Open COVID-19 Data Working Group\u2020, Louis Du Plessis,\nNuno R Faria, Ruoran Li, et al.",
|
| 374 |
+
"venue": "Science, 368(6490):493\u2013497, 2020.",
|
| 375 |
+
"url": null
|
| 376 |
+
}
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"27": {
|
| 380 |
+
"title": "Modelling the impact of testing, contact tracing and household\nquarantine on second waves of covid-19.",
|
| 381 |
+
"author": "Alberto Aleta, David Martin-Corral, Ana Pastore y Piontti, Marco Ajelli, Maria\nLitvinova, Matteo Chinazzi, Natalie E Dean, M Elizabeth Halloran, Ira M\nLongini Jr, Stefano Merler, et al.",
|
| 382 |
+
"venue": "Nature Human Behaviour, 4(9):964\u2013971, 2020.",
|
| 383 |
+
"url": null
|
| 384 |
+
}
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"28": {
|
| 388 |
+
"title": "Nowcasting unemployment rates with smartphone gps data.",
|
| 389 |
+
"author": "Daisuke Moriwaki.",
|
| 390 |
+
"venue": "In Multiple-Aspect Analysis of Semantic Trajectories: First\nInternational Workshop, MASTER 2019, Held in Conjunction with ECML-PKDD 2019,\nW\u00fcrzburg, Germany, September 16, 2019, Proceedings 1, pages 21\u201333.\nSpringer, 2020.",
|
| 391 |
+
"url": null
|
| 392 |
+
}
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"29": {
|
| 396 |
+
"title": "Estimating individual employment status using mobile phone network\ndata.",
|
| 397 |
+
"author": "P\u00e5l Sunds\u00f8y, Johannes Bjelland, Bj\u00f8rn-Atle Reme, Eaman Jahani, Erik\nWetter, and Linus Bengtsson.",
|
| 398 |
+
"venue": "arXiv preprint arXiv:1612.03870, 2016.",
|
| 399 |
+
"url": null
|
| 400 |
+
}
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"30": {
|
| 404 |
+
"title": "Tracking employment shocks using mobile phone data.",
|
| 405 |
+
"author": "Jameson L Toole, Yu-Ru Lin, Erich Muehlegger, Daniel Shoag, Marta C\nGonz\u00e1lez, and David Lazer.",
|
| 406 |
+
"venue": "Journal of The Royal Society Interface, 12(107):20150185, 2015.",
|
| 407 |
+
"url": null
|
| 408 |
+
}
|
| 409 |
+
},
|
| 410 |
+
{
|
| 411 |
+
"31": {
|
| 412 |
+
"title": "Mobile communication signatures of unemployment.",
|
| 413 |
+
"author": "Abdullah Almaatouq, Francisco Prieto-Castrillo, and Alex Pentland.",
|
| 414 |
+
"venue": "In Social Informatics: 8th International Conference, SocInfo\n2016, Bellevue, WA, USA, November 11-14, 2016, Proceedings, Part I 8, pages\n407\u2013418. Springer, 2016.",
|
| 415 |
+
"url": null
|
| 416 |
+
}
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"32": {
|
| 420 |
+
"title": "Uncovering the socioeconomic facets of human mobility.",
|
| 421 |
+
"author": "Hugo Barbosa, Surendra Hazarie, Brian Dickinson, Aleix Bassolas, Adam Frank,\nHenry Kautz, Adam Sadilek, Jos\u00e9 J Ramasco, and Gourab Ghoshal.",
|
| 422 |
+
"venue": "Scientific reports, 11(1):8616, 2021.",
|
| 423 |
+
"url": null
|
| 424 |
+
}
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"33": {
|
| 428 |
+
"title": "A tale of many cities: universal patterns in human urban mobility.",
|
| 429 |
+
"author": "Anastasios Noulas, Salvatore Scellato, Renaud Lambiotte, Massimiliano Pontil,\nand Cecilia Mascolo.",
|
| 430 |
+
"venue": "PloS one, 7(5):e37027, 2012.",
|
| 431 |
+
"url": null
|
| 432 |
+
}
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"34": {
|
| 436 |
+
"title": "Evidence for a conserved quantity in human mobility.",
|
| 437 |
+
"author": "Laura Alessandretti, Piotr Sapiezynski, Vedran Sekara, Sune Lehmann, and Andrea\nBaronchelli.",
|
| 438 |
+
"venue": "Nature human behaviour, 2(7):485\u2013491, 2018.",
|
| 439 |
+
"url": null
|
| 440 |
+
}
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"35": {
|
| 444 |
+
"title": "The universal visitation law of human mobility.",
|
| 445 |
+
"author": "Markus Schl\u00e4pfer, Lei Dong, Kevin O\u2019Keeffe, Paolo Santi, Michael Szell,\nHadrien Salat, Samuel Anklesaria, Mohammad Vazifeh, Carlo Ratti, and\nGeoffrey B West.",
|
| 446 |
+
"venue": "Nature, 593(7860):522\u2013527, 2021.",
|
| 447 |
+
"url": null
|
| 448 |
+
}
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"36": {
|
| 452 |
+
"title": "Human mobility: Models and applications.",
|
| 453 |
+
"author": "Hugo Barbosa, Marc Barthelemy, Gourab Ghoshal, Charlotte R James, Maxime\nLenormand, Thomas Louail, Ronaldo Menezes, Jos\u00e9 J Ramasco, Filippo\nSimini, and Marcello Tomasini.",
|
| 454 |
+
"venue": "Physics Reports, 734:1\u201374, 2018.",
|
| 455 |
+
"url": null
|
| 456 |
+
}
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"37": {
|
| 460 |
+
"title": "Data - longitudinal employer-household dynamics.",
|
| 461 |
+
"author": "Center for Economic Studies US Census Bureau.",
|
| 462 |
+
"venue": "Accessed on 2024-02-02.",
|
| 463 |
+
"url": null
|
| 464 |
+
}
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"38": {
|
| 468 |
+
"title": "Accessed on 2024-02-02.",
|
| 469 |
+
"author": "North american industry classification system (naics) u.s. census bureau.",
|
| 470 |
+
"venue": null,
|
| 471 |
+
"url": null
|
| 472 |
+
}
|
| 473 |
+
},
|
| 474 |
+
{
|
| 475 |
+
"39": {
|
| 476 |
+
"title": "Glossary.",
|
| 477 |
+
"author": "US Census Bureau.",
|
| 478 |
+
"venue": "Accessed on 2024-02-02.",
|
| 479 |
+
"url": null
|
| 480 |
+
}
|
| 481 |
+
},
|
| 482 |
+
{
|
| 483 |
+
"40": {
|
| 484 |
+
"title": "How many jobs can be done at home?",
|
| 485 |
+
"author": "Jonathan I Dingel and Brent Neiman.",
|
| 486 |
+
"venue": "Journal of Public Economics, 189:104235, 2020.",
|
| 487 |
+
"url": null
|
| 488 |
+
}
|
| 489 |
+
},
|
| 490 |
+
{
|
| 491 |
+
"41": {
|
| 492 |
+
"title": "Socioeconomic disparities in mobility behavior during the covid-19\npandemic in developing countries.",
|
| 493 |
+
"author": "Lorenzo Lucchini, Ollin Langle-Chimal, Lorenzo Candeago, Lucio Melito, Alex\nChunet, Aleister Montfort, Bruno Lepri, Nancy Lozano-Gracia, and Samuel P\nFraiberger.",
|
| 494 |
+
"venue": "arXiv preprint arXiv:2305.06888, 2023.",
|
| 495 |
+
"url": null
|
| 496 |
+
}
|
| 497 |
+
},
|
| 498 |
+
{
|
| 499 |
+
"42": {
|
| 500 |
+
"title": "Limits of predictability in human mobility.",
|
| 501 |
+
"author": "Chaoming Song, Zehui Qu, Nicholas Blumm, and Albert-L\u00e1szl\u00f3\nBarab\u00e1si.",
|
| 502 |
+
"venue": "Science, 327(5968):1018\u20131021, 2010.",
|
| 503 |
+
"url": null
|
| 504 |
+
}
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"43": {
|
| 508 |
+
"title": "Social media fingerprints of unemployment.",
|
| 509 |
+
"author": "Alejandro Llorente, Manuel Garcia-Herranz, Manuel Cebrian, and Esteban Moro.",
|
| 510 |
+
"venue": "PloS one, 10(5):e0128692, 2015.",
|
| 511 |
+
"url": null
|
| 512 |
+
}
|
| 513 |
+
},
|
| 514 |
+
{
|
| 515 |
+
"44": {
|
| 516 |
+
"title": "The generalization of \u2018student\u2019s\u2019problem when several different\npopulation varlances are involved.",
|
| 517 |
+
"author": "Bernard L Welch.",
|
| 518 |
+
"venue": "Biometrika, 34(1-2):28\u201335, 1947.",
|
| 519 |
+
"url": null
|
| 520 |
+
}
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"45": {
|
| 524 |
+
"title": "Influence of sociodemographic characteristics on human mobility.",
|
| 525 |
+
"author": "Maxime Lenormand, Thomas Louail, Oliva G Cant\u00fa-Ros, Miguel Picornell,\nRicardo Herranz, Juan Murillo Arias, Marc Barthelemy, Maxi San Miguel, and\nJos\u00e9 J Ramasco.",
|
| 526 |
+
"venue": "Scientific reports, 5(1):10075, 2015.",
|
| 527 |
+
"url": null
|
| 528 |
+
}
|
| 529 |
+
},
|
| 530 |
+
{
|
| 531 |
+
"46": {
|
| 532 |
+
"title": "Gender gaps in urban mobility.",
|
| 533 |
+
"author": "Laetitia Gauvin, Michele Tizzoni, Simone Piaggesi, Andrew Young, Natalia Adler,\nStefaan Verhulst, Leo Ferres, and Ciro Cattuto.",
|
| 534 |
+
"venue": "Humanities and Social Sciences Communications, 7(1):1\u201313,\n2020.",
|
| 535 |
+
"url": null
|
| 536 |
+
}
|
| 537 |
+
},
|
| 538 |
+
{
|
| 539 |
+
"47": {
|
| 540 |
+
"title": "Socio-economic determinants of mobility responses during the first\nwave of covid-19 in italy: from provinces to neighbourhoods.",
|
| 541 |
+
"author": "Laetitia Gauvin, Paolo Bajardi, Emanuele Pepe, Brennan Lake, Filippo Privitera,\nand Michele Tizzoni.",
|
| 542 |
+
"venue": "Journal of The Royal Society Interface, 18(181):20210092, 2021.",
|
| 543 |
+
"url": null
|
| 544 |
+
}
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"48": {
|
| 548 |
+
"title": "High-resolution human mobility data reveal race and wealth\ndisparities in disaster evacuation patterns.",
|
| 549 |
+
"author": "Hengfang Deng, Daniel P Aldrich, Michael M Danziger, Jianxi Gao, Nolan E\nPhillips, Sean P Cornelius, and Qi Ryan Wang.",
|
| 550 |
+
"venue": "Humanities and Social Sciences Communications, 8(1):1\u20138, 2021.",
|
| 551 |
+
"url": null
|
| 552 |
+
}
|
| 553 |
+
},
|
| 554 |
+
{
|
| 555 |
+
"49": {
|
| 556 |
+
"title": "Modelling the scaling properties of human mobility.",
|
| 557 |
+
"author": "Chaoming Song, Tal Koren, Pu Wang, and Albert-L\u00e1szl\u00f3 Barab\u00e1si.",
|
| 558 |
+
"venue": "Nature physics, 6(10):818\u2013823, 2010.",
|
| 559 |
+
"url": null
|
| 560 |
+
}
|
| 561 |
+
},
|
| 562 |
+
{
|
| 563 |
+
"50": {
|
| 564 |
+
"title": "A universal model for mobility and migration patterns.",
|
| 565 |
+
"author": "Filippo Simini, Marta C Gonz\u00e1lez, Amos Maritan, and Albert-L\u00e1szl\u00f3\nBarab\u00e1si.",
|
| 566 |
+
"venue": "Nature, 484(7392):96\u2013100, 2012.",
|
| 567 |
+
"url": null
|
| 568 |
+
}
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"51": {
|
| 572 |
+
"title": "Intervening opportunities: A theory relating mobility and distance.",
|
| 573 |
+
"author": "Samuel A. Stouffer.",
|
| 574 |
+
"venue": "American Sociological Review, 5(6):845\u2013867, 1940.",
|
| 575 |
+
"url": null
|
| 576 |
+
}
|
| 577 |
+
},
|
| 578 |
+
{
|
| 579 |
+
"52": {
|
| 580 |
+
"title": "The gravity model in transportation analysis: theory and\nextensions, volume 3.",
|
| 581 |
+
"author": "Sven Erlander and Neil F Stewart.",
|
| 582 |
+
"venue": "Vsp, 1990.",
|
| 583 |
+
"url": null
|
| 584 |
+
}
|
| 585 |
+
},
|
| 586 |
+
{
|
| 587 |
+
"53": {
|
| 588 |
+
"title": "Human mobility modelling: exploration and preferential return meet\nthe gravity model.",
|
| 589 |
+
"author": "Luca Pappalardo, Salvatore Rinzivillo, and Filippo Simini.",
|
| 590 |
+
"venue": "Procedia Computer Science, 83:934\u2013939, 2016.",
|
| 591 |
+
"url": null
|
| 592 |
+
}
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"54": {
|
| 596 |
+
"title": "Individual consequences of job loss and unemployment.",
|
| 597 |
+
"author": "Karsten I Paul, Alice Hassel, and Klaus Moser.",
|
| 598 |
+
"venue": "Oxford handbook of job loss and job search, pages 57\u201385, 2018.",
|
| 599 |
+
"url": null
|
| 600 |
+
}
|
| 601 |
+
},
|
| 602 |
+
{
|
| 603 |
+
"55": {
|
| 604 |
+
"title": "The far-reaching impact of job loss and unemployment.",
|
| 605 |
+
"author": "Jennie E Brand.",
|
| 606 |
+
"venue": "Annual review of sociology, 41:359\u2013375, 2015.",
|
| 607 |
+
"url": null
|
| 608 |
+
}
|
| 609 |
+
},
|
| 610 |
+
{
|
| 611 |
+
"56": {
|
| 612 |
+
"title": "The individual experience of unemployment.",
|
| 613 |
+
"author": "Connie R Wanberg.",
|
| 614 |
+
"venue": "Annual review of psychology, 63:369\u2013396, 2012.",
|
| 615 |
+
"url": null
|
| 616 |
+
}
|
| 617 |
+
},
|
| 618 |
+
{
|
| 619 |
+
"57": {
|
| 620 |
+
"title": "Accessed on 2024-01-31.",
|
| 621 |
+
"author": "Monthly program and financial data, employment & training administration (eta)\n- u.s. department of labor.",
|
| 622 |
+
"venue": null,
|
| 623 |
+
"url": null
|
| 624 |
+
}
|
| 625 |
+
},
|
| 626 |
+
{
|
| 627 |
+
"58": {
|
| 628 |
+
"title": "The structure of borders in a small world.",
|
| 629 |
+
"author": "Christian Thiemann, Fabian Theis, Daniel Grady, Rafael Brune, and Dirk\nBrockmann.",
|
| 630 |
+
"venue": "PloS one, 5(11):e15422, 2010.",
|
| 631 |
+
"url": null
|
| 632 |
+
}
|
| 633 |
+
},
|
| 634 |
+
{
|
| 635 |
+
"59": {
|
| 636 |
+
"title": "Unemployed and alone? unemployment and social participation in\neurope.",
|
| 637 |
+
"author": "Martina Dieckhoff and Vanessa Gash.",
|
| 638 |
+
"venue": "International Journal of Sociology and Social Policy,\n35(1/2):67\u201390, 2015.",
|
| 639 |
+
"url": null
|
| 640 |
+
}
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"60": {
|
| 644 |
+
"title": "Unemployment and social exclusion.",
|
| 645 |
+
"author": "Laura Pohlan.",
|
| 646 |
+
"venue": "Journal of Economic Behavior & Organization, 164:273\u2013299,\n2019.",
|
| 647 |
+
"url": null
|
| 648 |
+
}
|
| 649 |
+
},
|
| 650 |
+
{
|
| 651 |
+
"61": {
|
| 652 |
+
"title": "Coupling human mobility and social ties.",
|
| 653 |
+
"author": "Jameson L Toole, Carlos Herrera-Yaq\u00fce, Christian M Schneider, and Marta C\nGonz\u00e1lez.",
|
| 654 |
+
"venue": "Journal of The Royal Society Interface, 12(105):20141128, 2015.",
|
| 655 |
+
"url": null
|
| 656 |
+
}
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"62": {
|
| 660 |
+
"title": "Living in a pandemic: changes in mobility routines, social activity\nand adherence to covid-19 protective measures.",
|
| 661 |
+
"author": "Lorenzo Lucchini, Simone Centellegher, Luca Pappalardo, Riccardo Gallotti,\nFilippo Privitera, Bruno Lepri, and Marco De Nadai.",
|
| 662 |
+
"venue": "Scientific reports, 11(1):24452, 2021.",
|
| 663 |
+
"url": null
|
| 664 |
+
}
|
| 665 |
+
},
|
| 666 |
+
{
|
| 667 |
+
"63": {
|
| 668 |
+
"title": "Psychological and physical well-being during unemployment: a\nmeta-analytic study.",
|
| 669 |
+
"author": "Frances McKee-Ryan, Zhaoli Song, Connie R Wanberg, and Angelo J Kinicki.",
|
| 670 |
+
"venue": "Journal of applied psychology, 90(1):53, 2005.",
|
| 671 |
+
"url": null
|
| 672 |
+
}
|
| 673 |
+
},
|
| 674 |
+
{
|
| 675 |
+
"64": {
|
| 676 |
+
"title": "Work, unemployment, and mental health.",
|
| 677 |
+
"author": "Peter Warr.",
|
| 678 |
+
"venue": "Oxford University Press, 1987.",
|
| 679 |
+
"url": null
|
| 680 |
+
}
|
| 681 |
+
},
|
| 682 |
+
{
|
| 683 |
+
"65": {
|
| 684 |
+
"title": "Unemployment and mental health: Some british studies.",
|
| 685 |
+
"author": "Peter Warr, Paul Jackson, and Michael Banks.",
|
| 686 |
+
"venue": "Journal of social issues, 44(4):47\u201368, 1988.",
|
| 687 |
+
"url": null
|
| 688 |
+
}
|
| 689 |
+
},
|
| 690 |
+
{
|
| 691 |
+
"66": {
|
| 692 |
+
"title": "The psychological impact of unemployment.",
|
| 693 |
+
"author": "Norman T Feather.",
|
| 694 |
+
"venue": "Springer Science & Business Media, 2012.",
|
| 695 |
+
"url": null
|
| 696 |
+
}
|
| 697 |
+
},
|
| 698 |
+
{
|
| 699 |
+
"67": {
|
| 700 |
+
"title": "Project lachesis: Parsing and modeling location histories.",
|
| 701 |
+
"author": "Ramaswamy Hariharan and Kentaro Toyama.",
|
| 702 |
+
"venue": "In Max J. Egenhofer, Christian Freksa, and Harvey J. Miller, editors,\nGeographic Information Science, pages 106\u2013124, Berlin, Heidelberg,\n2004. Springer Berlin Heidelberg.",
|
| 703 |
+
"url": null
|
| 704 |
+
}
|
| 705 |
+
},
|
| 706 |
+
{
|
| 707 |
+
"68": {
|
| 708 |
+
"title": "A density-based algorithm for discovering clusters in large spatial\ndatabases with noise.",
|
| 709 |
+
"author": "Martin Ester, Hans-Peter Kriegel, J\u00f6rg Sander, Xiaowei Xu, et al.",
|
| 710 |
+
"venue": "In kdd, volume 96, 34, pages 226\u2013231, 1996.",
|
| 711 |
+
"url": null
|
| 712 |
+
}
|
| 713 |
+
},
|
| 714 |
+
{
|
| 715 |
+
"69": {
|
| 716 |
+
"title": "The timegeo modeling framework for urban mobility without travel\nsurveys.",
|
| 717 |
+
"author": "Shan Jiang, Yingxiang Yang, Siddharth Gupta, Daniele Veneziano, Shounak\nAthavale, and Marta C Gonz\u00e1lez.",
|
| 718 |
+
"venue": "Proceedings of the National Academy of Sciences,\n113(37):E5370\u2013E5378, 2016.",
|
| 719 |
+
"url": null
|
| 720 |
+
}
|
| 721 |
+
},
|
| 722 |
+
{
|
| 723 |
+
"70": {
|
| 724 |
+
"title": "Percent of population who worked on weekdays and weekend days.",
|
| 725 |
+
"author": "Bureau of Labor Statistics, American Time Use Survey.",
|
| 726 |
+
"venue": "https://www.bls.gov/tus/charts/chart11.pdf, 2015.",
|
| 727 |
+
"url": null
|
| 728 |
+
}
|
| 729 |
+
},
|
| 730 |
+
{
|
| 731 |
+
"71": {
|
| 732 |
+
"title": "Business hours - wikipedia.",
|
| 733 |
+
"author": "Wikipedia.",
|
| 734 |
+
"venue": "Version access: 2023-12-24.",
|
| 735 |
+
"url": null
|
| 736 |
+
}
|
| 737 |
+
}
|
| 738 |
+
],
|
| 739 |
+
"url": "http://arxiv.org/html/2403.10276v2"
|
| 740 |
+
}
|
20241217/2403.13680v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2403.15698v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2404.02877v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2404.06825v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2405.08359v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2405.14877v2.json
ADDED
|
@@ -0,0 +1,124 @@
| 1 |
+
{
|
| 2 |
+
"title": "Visual Deformation Detection Using Soft Material Simulation for Pre-training of Condition Assessment Models",
|
| 3 |
+
"abstract": "This paper addresses the challenge of geometric quality assurance in manufacturing, particularly when human assessment is required. It proposes using Blender, an open-source simulation tool, to create synthetic datasets for machine learning (ML) models. The process involves translating expert information into shape key parameters to simulate deformations, generating images for both deformed and non-deformed objects. The study explores the impact of discrepancies between real and simulated environments on ML model performance and investigates the effect of different simulation backgrounds on model sensitivity. Additionally, the study aims to enhance the model\u2019s robustness to camera positioning by generating datasets with a variety of randomized viewpoints. The entire process, from data synthesis to model training and testing, is implemented using a Python API interfacing with Blender. An experiment with a soda can object validates the accuracy of the proposed pipeline.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The process of geometrical quality assurance based on the coordinate measurement is an essential aspect of ensuring the accuracy and consistency of manufactured products in assembly lines. By utilizing this method, manufacturers can reduce production time and costs, while also increasing the overall quality of their precision products [1 ###reference_b1###]. The accuracy of geometrical quality inspection in industrial settings heavily relies on the visual or haptic assessment of human operators, which can be time-consuming and prone to errors [2 ###reference_b2###]. This can lead to significant delays and increased costs in the manufacturing process, making it a critical issue that needs to be addressed.\nHeuristic methods, such as conventional photogrammetry [3 ###reference_b3###], have been popular in industrial inspection, but they often lack generalizability and are limited by the complexity of the inspection task. As a result, there is a growing interest in the development of more advanced and automated inspection methods, such as computer vision and machine learning, that can provide more accurate and efficient solutions to quality assurance in industrial settings. Modern quality assurance methods powered by machine learning can solve this problem [4 ###reference_b4###], but require large datasets with diverse deformations for training [5 ###reference_b5###]. Generating a large and diverse dataset of deformations in physical objects can be a difficult and time-consuming task. Moreover, the physical limitations and constraints in object deformations might not cover all the possible scenarios, making it difficult to achieve a comprehensive dataset [6 ###reference_b6###].\nTherefore, alternative approaches, such as simulations, can provide a more efficient and versatile way of generating such datasets for machine learning-based quality assurance methods [7 ###reference_b7###]. While simulation is a viable option for creating training datasets, there have been efforts to explore data augmentation [8 ###reference_b8###] or generation methods like generative adversarial networks (GAN) [9 ###reference_b9###]. However, these methods are not as interpretable or controllable by expert knowledge, and may not provide the same level of meaningful diversity as simulations. Independent research has been conducted in this area, specifically aimed at automating the simulation of deformations [10 ###reference_b10###]. As a result, this technology is now available for data synthesis applications. This advancement has enhanced the efficiency and accuracy of data synthesis, enabling researchers to generate realistic simulations of deformations more quickly and reliably.\nWhile simulated input can be useful in training machine learning models, there is a risk that the model\u2019s performance will be degraded when transferred to a physical environment due to discrepancies with the real world. Therefore, it is important to consider the limitations of simulated input. One approach to improving the robustness of machine learning models in quality control applications is to add noise to images [11 ###reference_b11###]. This can help the model to better generalize and perform well on unseen data in the real world. However, it is important to carefully select the type and amount of noise to add, as too much noise can spoil the model\u2019s performance. Another factor to consider when working with machine vision is the camera position and calibration. 
In order to ensure faster and more accurate evaluations in the real world, it is imperative to reduce the sensitivity of machine vision to camera calibration [12 ###reference_b12###].\nGeometrical assessment of manufacturing products has taken advantage of a wide range of measurement techniques to accurately identify and classify deformities in objects. Researchers have explored different methods for shape measurement, including the use of laser optics [13 ###reference_b13###, 14 ###reference_b14###]. However, this technique can be expensive and requires specialized equipment, making it less accessible for some applications. An alternative approach that has gained popularity in recent years is the use of RGB-Depth (RGB-D) images for deformity detection and classification. These images capture both the color and depth information of objects, allowing for a more comprehensive analysis of their geometries. By leveraging machine learning algorithms and computer vision techniques, researchers have been able to identify different types of deformations in objects, such as rigid, elastic, plastic, etc., from images [15 ###reference_b15###].\nOn one hand, The advantage of using RGB-D images over laser optics is the ease and accessibility of data collection which allows for faster and more cost-effective data acquisition, making it a viable option for many industrial and manufacturing settings. On the other hand, using point-cloud laser optic measurement of objects in combination with image inputs has shown great accuracy suitable for real-time deformation detection in an industrial setting [16 ###reference_b16###]. Another approach in this regard is using conventional heuristic photogrammetry techniques to obtain 2D readings from 3D CAD models and compare the outcome with real 2D images taken from the object [3 ###reference_b3###]. Overall, our concern in this paper would be the availability and easier implementation to be able to address a wider range of applications. Therefore, exploiting raw CAD files and RGB images would be the priority over using other techniques.\nMachine learning manufacturing applications using image analysis have been a research topic for many years in the manufacturing and production industries [17 ###reference_b17###]. For instance, in a recent paper, a computer vision module was designed to substitute human judgment in wear analysis as a part of manufacturing assessments [18 ###reference_b18###]. Additionally, supervised learning although the predominant machine learning framework, is not the only one explored. A semi-supervised learning pipeline suggested by [19 ###reference_b19###], uses Auto-encoders to reduce the dependence of regular supervised-learning-based methods on huge labeled datasets. Although this improvement will facilitate the inclusion of unlabeled data in many cases, the total available data, whether labeled or not, might not always be enough. This necessitates using simulation-based data generation pipelines. The competence of such an approach has been verified in different problems of this field [20 ###reference_b20###], and in this paper, we proposed a pipeline for another application whose efficiency is validated in a simulated environment.\nWe make use of an open-source graphics simulation tool, to automate the creation of synthetic datasets for machine learning (ML) models. The graphics engine uses shape key parameters to simulate object deformities and generate datasets for classification models. 
Randomized camera viewpoints are used to reduce the need for labor-intensive camera calibration in industrial settings. An experiment is conducted to assess the performance of the proposed pipeline, and the results show promise for the sim-to-real transfer of ML models trained using this method. The proposed scheme is illustrated in Figure 1 ###reference_###. Its entire data synthesis, model training, and testing stages have been implemented using an integrated computer script.\nThe paper is organized as the following. First, we introduce the environment chosen for showcasing our proposed method in Section II ###reference_###. Next, we describe the data generation procedure in Section III ###reference_###. We introduce the machine learning model architecture and training process in Section IV ###reference_###. Finally, we conclude the paper with the discussion of results and conclusion in Sections V ###reference_### and VII ###reference_###.\n\n###figure_1###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Environment",
|
| 15 |
+
"text": "The environment used for the simulation is Blender. Blender is an open-source computer graphics, rendering, and simulation environment. Blender accepts most file types associated with computer-aided design (CAD) and is capable of animating deformations and defects. The Python interface allows for the automation of procedural deformations and defects, and various camera positions for creating a dataset from a CAD design. Blendtorch is a Python framework for integrating Blender into Pytorch for deep learning applications [21 ###reference_b21###]. Blendtorch was used to integrate the Blender Python scripting environment into an integrated development environment (IDE) with the rest of the project so rendered images could be used to automate dataset generation. A soda can was used to model deformation and dataset creation. The object was selected as it is easy to acquire a sizeable physical dataset for and easy to deform to create the deformed class. The metallic nature of the object also presents an interesting classification challenge.\nDeformation of the object was done by combining a lattice deformation and a displace modifier. A simple cube lattice was created around the can to control major deformations coined as crushes, pinches, folds, twists, and crunches. When the lattice is deformed it controls the resulting deformation in the soda can. Twelve lattice deform shape keys were created of the various macro deformation types. Movement and rotation deformations were applied to both the seal and tab of the soda cans and mapped to shape keys and make the cans appear open.\nSmaller deformations on the surface of the can were created by using a displace modifier. Vertex groups were used to limit these deformations to the side of the can leaving the top and bottom unaffected. The displace modifier slightly inflates the object and applies a texture to deform the surface. For the soda can object, the most realistic texture was a hard Stucci texture scaled to look similar to the sharp edges of a crinkled pop can. Three different hard Stucci textures were created and assigned to a corresponding shape key.\nBy combining the lattice deformation and displacement deformations, high quality procedural deformations can be created. The shading of the environment was set up to create a realistic looking can, with a UV mapped label applied to the side and an aluminum like metallic texture applied to the top and bottom.\nA dynamic procedural lighting environment was created using the Blender world shading module. An EXR file was randomly selected and given a random rotation to create realistic and different light and shadows on the can object. The background of the object was either set to black or green based on if the lighting would hit the object from the camera view. The cameras were placed in a uniform random polar position as shown in Table I ###reference_###. The images were rendered and saved as 512x512 RGB images. Later in the pipeline, these images were modified."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Dataset Construction and Post-Processing",
|
| 21 |
+
"text": "Images rendered in the Blender environment were used to create a dataset. A total of 6000 images of the cans were used, 3000 of which were of the non-deformed object and 3000 were deformed. In all virtual samples, the tab and seal of the soda cans are set to a random state between fully open and fully closed. The tab is always set to a state more closed than the seal to prevent impossible geometries and part clipping. Dataset samples of the non-deformed object is shown in Figure 2(b) ###reference_sf2###.\nIn the deformed class, 12 lattice deformations and 3 displacement deformations were combined. The displacement deformations were each assigned a randomized weight and combined in a way to ensure there was a procedural deformation on the surface of the object. For the lattice deformations, three or more types (crush, pinch, etc.) were randomly selected and combined. Each type was assigned a randomized weight to differ the extent each lattice deformation type makes on the object. For each lattice deformation type, various shape keys were mixed and selected. The end result is a realistic, procedural deformation. Examples of the deformed class are shown in Figure 2(a) ###reference_sf1###. These 6000 images were used to create the synthetic dataset.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### The synthetic dataset was modified prior to training to facilitate model generalization. The background was initially rendered to an RGB value of (0, 255, 0) to create a green screen. Morphological operations such as erosion, opening and closing were used to create a high quality mask for background transfer. These operations are essential to prevent any holes that may be accidentally created in the mask and ensure no green pixels are left over which could affect training. The background images for the synthetic dataset were randomly selected from the BG-20k dataset[22 ###reference_b22###]. This dataset was chosen as it was created for image matting tasks where foreground features are recognized from a background image. This is ideal for the use case of a generalized background with the important can features in the foreground. Examples of these synthetic images are shown in Figure 2(c) ###reference_sf3### and Figure 2(d) ###reference_sf4### for the deformed and non-deformed classes.\nA physical dataset was created as well in order to provide a real-world object to test our synthetic data against and measure the sim-to-real performance gap. Like the synthetic dataset, 4 camera views of each deformed object were used. To create the 160 images making up the deformed class, 40 deformed cans were used. Three cans were used for the non-deformed class with five images from each camera view quadrant. These 60 images are used for the non-deformed class for the physical dataset. The images were taken with a smart phone camera in a 1:1 aspect ratio and in positions and angles similar to those set up in the synthetic dataset. Examples for the real-world data are shown in Figure 2(e) ###reference_sf5### and Figure 2(f) ###reference_sf6###."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Network and Training",
|
| 27 |
+
"text": "We use a modified version of pre-trained VGG-16 to analyze the sim-to-real crossing. The VGG-16 model architecture, proposed by Simonyan and Zisserman[23 ###reference_b23###], has been shown to be widely useful in various computer vision tasks such as image classification, object detection, and image segmentation. The simplicity of the architecture makes it easily adaptable and effective, making it a good network to use as a benchmark. The original VGG-16 model consists of 13 convolutional layers and 3 fully connected layers. The convolutional layers use 3-by-3 filters with a stride and padding of 1 to maintain the spatial dimensions of the input. Max pooling layers use a 2-by-2 filter with a stride of 2 and are used after every 2 or 3 convolution layers. These design characteristics reduce the spatial dimensions by half.\nThe VGG-16 network explained above, has been modified to adapt to our specific task. The network has the same convolution and max pooling layers, but the fully connected layers have been changed. After the convolution layers, an adaptive average pooling layer is used to fix the output size. This is followed by 3 fully connected layers with ReLU activation and a dropout layer. The 3 fully connected layers go to a one-hot output with a sigmoid activation function. The sigmoid activation function has been shown to be the optimal activation function in feed-forward binary classification networks [24 ###reference_b24###].\nWhile training, all convolution feature layers except the final layer were frozen in order to preserve learned features from previous training on ImageNet-1k. This pre-training allows the model to learn the general features of images, which can be transferred to the task of classifying the deformity state of cans. For both the black background and BG-20k backgrounds, the synthetic dataset was split into training and testing data and evaluated. For the sim-to-real training, the entire synthetic dataset was used for training and the real-world dataset was used for testing. The network was trained using the Adam optimizer with a learning rate of 0.001 and a batch size of 32, for 15 epochs."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Results",
|
| 33 |
+
"text": "The results for the network\u2019s training can be found in Table II ###reference_### and Figure 3 ###reference_###. Overall, VGG-16 was able to accurately detect deformation in both synthetic datasets but saw a significant decrease when validated on real-world data. This is expected, due to different deformations, camera angles, camera intrinsics, and lighting. Using BG-20k as a background saw a smaller sim-to-real gap than just a black background and therefore a significant improvement in generalization.\nThe black background dataset had the best synthetic accuracy as shown in Figure 3(a) ###reference_sf1### and had almost perfect sim-to-real accuracy. However, the results suffered significantly trying to cross the sim-to-real gap as shown in Figure 3(b) ###reference_sf2###.\nThe BG-20k background dataset had great synthetic accuracy but was slightly worse than the black background dataset as shown in Figure 3(c) ###reference_sf3###. The sim-to-real results still suffered an accuracy penalty but were much improved in comparison to the black background dataset as shown in Figure 3(d) ###reference_sf4###.\n###figure_8### ###figure_9### ###figure_10### ###figure_11### The PCA visualization shown in Figure 4 ###reference_### showcases the distributions of the datasets. Adding in the BG-20k background causes greater variation and more significant overlap than just the black background. This results in the network trained on synthetic BG-20k dataset to be better at predicting real-world deformations. The distribution of the black background is very tightly spaced with a significant difference to the physical dataset. The resulting performance increase from this more varied distribution is showcased in Table II ###reference_###.\n###figure_12###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "6",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "VI Discussion",
|
| 39 |
+
"text": "There are several implications due to the performance of this method. This provides a method for condition assessment and deformation detection for delicate objects, that require noninvasive methods, or where haptic sensing is unsuitable. Modern quality assurance is increasingly reliant on ML techniques which require extensive datasets. These datasets can be costly and time-consuming to create. Using the synthetic data created using this pipeline would reduce the cost and time required to create datasets.\nAs this method relies on the simulation quality, improvements to the simulation quality are expected to improve all sim-to-real metrics and real-world generalizability. Additional and more accurate shape keys could improve dataset quality due to more accurate simulation. Results are also expected to improve using depth cameras for the real-world dataset capture and compositing depth information in Blender. This is expected to improve results as well as allow for finer deformations to be detected.\nA future direction for this work could be using transfer learning or network improvements. Transfer learning would allow for the synthetic data to more accurately resemble the real-world data to adapt the synthetic dataset to a specific real-world scenario instead of generalization. Due to the camera views being stochastic, using generative adversarial networks (GANs) for unsupervised domain adaptation is a suitable future direction. However, these methods would require a limited physical dataset for pre-training the condition assessment network. More experimentation with domain generalization network architectures is also another avenue for future direction."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "7",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "VII Conclusion",
|
| 45 |
+
"text": "Quality assurance of objects requires expensive equipment and labor-intensive processes. In this work, we have proposed a novel pipeline for geometrical quality assurance that significantly reduces time and effort in industrial settings. The main contribution to the novelty of this pipeline is the use of Blender to facilitate data synthesis. Through expert knowledge, deformations can be applied to the virtual object negating the need for a physical dataset. Using the shape keys to apply various deformations in Blender, a large varietal dataset can be created for ML classification. The accuracy shown from both networks shows that the pipeline is effective at pre-training a network to detect deformations and cross the sim-to-real gap. This is particularly useful in cases where the object of quality control is difficult to create a dataset, e.g., the object is fragile, difficult to handle, expensive, and visual methods are required. The pipeline shown could be used to bring modern quality assurance to processes still reliant on human operators. The pipeline has shown good results and serves as an excellent solution or starting point for pre-training a condition assessment model for other objects. A significant strength of this method is that no real-world data is required to train the condition assessment model while having good performance metrics. Our ongoing research aims to extend the proposed pipeline to adapt the synthetic domain to a specific real-world domain using a generative adversarial network (GAN) as opposed to background generalization."
|
| 46 |
+
}
|
| 47 |
+
],
|
| 48 |
+
"appendix": [],
|
| 49 |
+
"tables": {
|
| 50 |
+
"1": {
|
| 51 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T1.11.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S2.T1.12.2\" style=\"font-size:90%;\">Camera position parameters in Blender</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.9\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.9.10.1\">\n<td class=\"ltx_td\" id=\"S2.T1.9.10.1.1\" rowspan=\"2\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"4\" id=\"S2.T1.9.10.1.2\">Camera</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.9.11.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.9.11.2.1\">No. 1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.9.11.2.2\">No. 2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.9.11.2.3\">No.3</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.9.11.2.4\">No.4</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.4\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S2.T1.5.5.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"4\" id=\"S2.T1.7.7.2\">\n for all cameras</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" colspan=\"4\" id=\"S2.T1.9.9.2\">\n for all cameras</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 52 |
+
"capture": "TABLE I: Camera position parameters in Blender"
|
| 53 |
+
},
|
| 54 |
+
"2": {
|
| 55 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.6.1.1\" style=\"font-size:90%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S5.T2.7.2\" style=\"font-size:90%;\">Performance metrics of VGG-16</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.4.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T2.4.5.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.4.5.1.1.1\">Metrics</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" colspan=\"2\" id=\"S5.T2.4.5.1.2\">Synthetic dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S5.T2.4.5.1.3\">Sim-to-real</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.6.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.4.6.2.1\">Black</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.4.6.2.2\">BG-20k</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.4.6.2.3\">Black</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.4.6.2.4\">BG-20k</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.2\">0.998</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.3\">0.955</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.4\">0.450</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.5\">0.755</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2\">0.998</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.3\">0.955</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.4\">0.485</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.5\">0.550</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.2\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.3\">0.997</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.4\">0.950</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.5\">0.550</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S5.T2.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T2.4.4.2\">0.997</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.3\">0.917</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T2.4.4.4\">0.326</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T2.4.4.5\">0.550</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 56 |
+
"capture": "TABLE II: Performance metrics of VGG-16"
|
| 57 |
+
}
|
| 58 |
+
},
|
| 59 |
+
"image_paths": {
|
| 60 |
+
"1": {
|
| 61 |
+
"figure_path": "2405.14877v2_figure_1.png",
|
| 62 |
+
"caption": "Figure 1: Proposed simulation-based deformation inspection pipeline.",
|
| 63 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/Pipeline_2.png"
|
| 64 |
+
},
|
| 65 |
+
"2(a)": {
|
| 66 |
+
"figure_path": "2405.14877v2_figure_2(a).png",
|
| 67 |
+
"caption": "(a) Synthetic deformed can black background\nFigure 2: Synthetic and real can dataset examples",
|
| 68 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/combine_images_4.jpg"
|
| 69 |
+
},
|
| 70 |
+
"2(b)": {
|
| 71 |
+
"figure_path": "2405.14877v2_figure_2(b).png",
|
| 72 |
+
"caption": "(b) Synthetic non-deformed can black background\nFigure 2: Synthetic and real can dataset examples",
|
| 73 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/combine_images_5.jpg"
|
| 74 |
+
},
|
| 75 |
+
"2(c)": {
|
| 76 |
+
"figure_path": "2405.14877v2_figure_2(c).png",
|
| 77 |
+
"caption": "(c) Synthetic deformed can BG-20k background\nFigure 2: Synthetic and real can dataset examples",
|
| 78 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/combine_images_6.jpg"
|
| 79 |
+
},
|
| 80 |
+
"2(d)": {
|
| 81 |
+
"figure_path": "2405.14877v2_figure_2(d).png",
|
| 82 |
+
"caption": "(d) Synthetic non-deformed can BG-20k background\nFigure 2: Synthetic and real can dataset examples",
|
| 83 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/combine_images_7.jpg"
|
| 84 |
+
},
|
| 85 |
+
"2(e)": {
|
| 86 |
+
"figure_path": "2405.14877v2_figure_2(e).png",
|
| 87 |
+
"caption": "(e) Real deformed can\nFigure 2: Synthetic and real can dataset examples",
|
| 88 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/combine_images_2.jpg"
|
| 89 |
+
},
|
| 90 |
+
"2(f)": {
|
| 91 |
+
"figure_path": "2405.14877v2_figure_2(f).png",
|
| 92 |
+
"caption": "(f) Real non-deformed can\nFigure 2: Synthetic and real can dataset examples",
|
| 93 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/combine_images_3.jpg"
|
| 94 |
+
},
|
| 95 |
+
"3(a)": {
|
| 96 |
+
"figure_path": "2405.14877v2_figure_3(a).png",
|
| 97 |
+
"caption": "(a) Black background - synthetic dataset\nFigure 3: Confusion matrices for the network performance",
|
| 98 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/VGG-black-synth-only.png"
|
| 99 |
+
},
|
| 100 |
+
"3(b)": {
|
| 101 |
+
"figure_path": "2405.14877v2_figure_3(b).png",
|
| 102 |
+
"caption": "(b) Black background - sim-to-real\nFigure 3: Confusion matrices for the network performance",
|
| 103 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/VGG-black.png"
|
| 104 |
+
},
|
| 105 |
+
"3(c)": {
|
| 106 |
+
"figure_path": "2405.14877v2_figure_3(c).png",
|
| 107 |
+
"caption": "(c) BG-20k background - synthetic dataset\nFigure 3: Confusion matrices for the network performance",
|
| 108 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/VGG-rand-synth-only.png"
|
| 109 |
+
},
|
| 110 |
+
"3(d)": {
|
| 111 |
+
"figure_path": "2405.14877v2_figure_3(d).png",
|
| 112 |
+
"caption": "(d) BG-20k background - sim-to-real\nFigure 3: Confusion matrices for the network performance",
|
| 113 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/VGG-rand.png"
|
| 114 |
+
},
|
| 115 |
+
"4": {
|
| 116 |
+
"figure_path": "2405.14877v2_figure_4.png",
|
| 117 |
+
"caption": "Figure 4: PCA visualization of all datasets",
|
| 118 |
+
"url": "http://arxiv.org/html/2405.14877v2/extracted/6063268/figures/PCA.png"
|
| 119 |
+
}
|
| 120 |
+
},
|
| 121 |
+
"validation": true,
|
| 122 |
+
"references": [],
|
| 123 |
+
"url": "http://arxiv.org/html/2405.14877v2"
|
| 124 |
+
}
|
20241217/2405.17812v2.json
ADDED
|
@@ -0,0 +1,179 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Lyndon pairs and the lexicographically greatest perfect necklace",
|
| 3 |
+
"abstract": "Fix a finite alphabet. A necklace is a circular word. For positive integers and , a necklace is -perfect if all words of length occur times but at positions with different congruence modulo , for any convention of the starting position. We define the notion of a Lyndon pair and we use it to construct the lexicographically greatest -perfect necklace, for any and such that divides or divides .\nOur construction generalizes Fredricksen and Maiorana\u2019s construction of the lexicographically greatest de Bruijn sequence of order , based on the concatenation of the Lyndon words whose length divide .",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "1. Introduction",
|
| 9 |
+
"text": "Let be a finite alphabet with at least two symbols.\nA word on is a finite sequence of symbols, and a necklace is the equivalence class of a word under rotations.\nGiven two positive integers, and , a necklace is -perfect if all words of length \noccur times but at positions with\ndifferent congruence modulo , for any convention of the starting position.\nThe well known circular de Bruijn sequences of order , see [3 ###reference_b3###, 7 ###reference_b7###, 8 ###reference_b8###],\nare exactly the -perfect necklaces for .\nFor example, is a -perfect for .\nThe -perfect necklaces correspond to Hamiltonian cycles in the tensor product of the de Bruijn graph with a simple cycle of length .\nA thorough presentation of perfect necklaces appears in [1 ###reference_b1###]. With the purpose of constructing normal numbers with very fast convergence to normality M. Levin in [9 ###reference_b9###] gives two constructions of perfect necklaces.\nOne based on arithmetic progressions with difference coprime with the alphabet size which yields -perfect necklaces. The other based on Pascal triangle matrix which yields nested -perfect necklaces when is a power of .\nIn [2 ###reference_b2###] there is a method of constructing all nested -perfect necklaces for the alphabet .\nAssume the lexicographic order on words.\nA Lyndon word is a nonempty aperiodic word that is lexicographically greater than all of its rotations.\nFor example, the Lyndon words over alphabet \nsorted by length and then in decreasing lexicographical order within each length are\nLyndon words were introduced by Lyndon in the 1950s [10 ###reference_b10###, 11 ###reference_b11###].\nThey provide a nice factorization of the free monoid :\neach word in \nhas a unique decomposition as a product\n of\na non-increasing sequence of Lyndon words \nin the lexicographic order.\nThe problem to compute the prime factorization of a given word\nhas a\nsolution in time linear to the length of the given word [4 ###reference_b4###], see also [6 ###reference_b6###].\nFredricksen and Maiorana [5 ###reference_b5###] construct a de Bruijn sequence of order by concatenating all the Lyndon words whose length divides .\nThey first identify each necklace with the word that represents the lexicographically maximal rotation. Order the necklaces according to lexicographical order of these words.\nFredericksen and Maiorana de Bruijn sequence of order is the concatenation, according to this order of the necklaces, of the respective Lyndon words having length divisible by .\nFor example, the binary words of length yield necklaces, and the words representing the lexicographically maximal rotations, in decreasing lexicographical order are:\nThe corresponding Lyndon words, in the above order are:\nThe length of each of them divides , because it is or or . Then, none of them is discarded, hence,\nFredericksen and Maiorana\u2019s de Bruijn sequence of order sequence is:\nThis construction, together with the efficient generation of Lyndon words, provides a method for constructing the lexicographically greatest de Bruijn necklace of each order \nin linear time and logarithmic space.\nIn this note we present the notion of Lyndon pairs and we use it to generalize Fredricksen and Maiorana\u2019s algorithm to construct the lexicographically greatest -perfect necklaces,\nfor any and such that divides or divides .\nOur presentation also\nelaborates the rather briefly presented ideas and proofs in [5 ###reference_b5###]."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "2. Lyndon pairs in lexicographical order",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "2.1. Lyndon pairs",
|
| 21 |
+
"text": "We assume a finite alphabet with cardinality , with .\nWithout loss of generality we assume .\nWe use lowercase letters possibly with subindices for alphabet symbols.\nWords are finite sequences of symbols that we write or with a capital letter .\nWe write to denote the word of length made just of s.\nand we write to denote the word made of copies of .\nThe concatenation of two words and is written .\nThe length of a word is denoted with .\nThe positions of a word are numbered from to .\nWe use to denote the decreasing lexicographic order on words and we write\n when or .\nWe use lowercase letters to denote non-negative integers.\nWe write to say that divides .\nWe write for the set of residues modulo .\nWe also use and for the natural orders on and and, as usual, when or ; and when or .\nWhen we may write .\nWe work with pairs in \nwhen or .\nThis condition on and is assumed all along the sequel.\nWe refer to pairs with calligraphic letter .\nIf in we write to denote in .\nWe consider the following order over .\nThe smallest the second component in , the -greater the pair. Among pairs with the same second component in , the order is defined with the decreasing lexicographic order on .\nThus, is the -greatest\nin \nand is the -least.\nAs usual, we write\n exactly when or .\nGiven a pair in \nits right rotation is the pair\n and its left rotation is the pair .\nFor , \nand ,\nits right rotation is , and\nits left rotation\nis .\nThe rotation function induces a relation between pairs:\ntwo pairs are related if successive rotations initially applied to the first yield the second.\nThis relation is clearly reflexive and transitive.\nFor pairs in ,\nwhen or ,\nthe rotation has an inverse, given by successive rotations.\nSo the relation is also symmetric, hence, an equivalence relation.\nA necklace in \nis a set of pairs that are equivalent under rotations.\nFor each , the following set is a necklace\nIn each necklace we are interested in the pair that is maximal in the order .\nIf a pair in is -maximal among its rotations we call it\nmaximal.\nFor example, for and \nthe pair is maximal because it is -greater than its rotations\n,\n and\n.\nThe pair is maximal because it is -greater than its rotation\n.\nWhen \nthe maximal pairs in\n\nare the pairs for .\nWe can concatenate pairs in\n\nhaving the same second component:\nthe concatenation of and is .\nGiven we write to denote the pair .\nAssume or .\nThen, is maximal exactly when,\nfor any ,\n\nis maximal in .\n().\nIf then all pairs in with second component are maximal .\nWe prove it for , .\nAssume such that is maximal , but is not maximal.\nThen has a rotation with such that\n\nand .\nThen either,\nBut this implies that either\nThis, in turn, implies , which contradicts that is maximal.\n.\nAssume is maximal but is not.\nThen, since is maximal and .\nSince is not maximal it has a rotation such that .\nThen, necessarily,\nGiven that the second component of both and is , necessarily . 
But this contradicts that was a maximal rotation.\n\u220e\nIf and is a maximal pair then none of its rotations are -greater than , but there may be a rotation that is equal to .\nFor a word we write to denote its prefix of length , that is, .\nLet and be positive integers such that or .\nLet\n\nin be maximal and\ndifferent from .\nSuppose , let be such that and let\nwhere\nif then ;\nand if then is the smallest such that and .\nThen, is maximal.\nLet be a maximal pair .\nBy way of contradiction, assume\n= is not a maximal pair.\nThen there is some multiple of such that :\n\nNecessarily\nSince is a maximal pair,\nTherefore, , , , . This implies, .\nConsequently,\nbut this contradicts that is a maximal pair.\n\u220e\nFor example, let , , and . Since is maximal and the symbol in position is , letting (a multiple of grater than or equal to ) we obtain\nin the maximal pair .\nNo filling with was required.\nSince the symbol in position is , letting we obtain the maximal pair which is obtained by concatenating times.\nLet and be positive integers such that .\nThe reduction of a word is the word\n where\n and\n is the smallest such that\n, .\nThe reduction of a pair in ,\nis the pair\n in .\nWhen , the reduction always exists because one can take .\nNotice that all the rotations of a reduced pair are pairwise different.\nFor example, for , and ,\n;\n;\n.\nLet and be positive integers such that .\nThe expansion of a word is the word\n.\nThe expansion of a pair\n in is the pair \nin\n.\nWhen the expansion always exists.\nFor example for , and ,\n.\nNotice that when then .\nIf then for every pair , is a prefix of .\nLet and be positive integers.\nWhen , the Lyndon pairs are the reductions of the maximal pairs in .\nWhen with , the Lyndon pairs are the expansions of the maximal pairs .\nThus, when , the Lyndon pairs are elements in .\nBut when the Lyndon pairs are elements in .\nEvery Lyndon pair is\nstrictly -greater\nthan each of its rotations."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "2.2. The operator",
|
| 27 |
+
"text": "We define the operator that given a pair in \ndifferent from but\nwith second component ,\nit defines another pair in \nwith second component .\nLet and be positive integers such that or .\nFor in \nsuch that ,\nwe define the operator ,\nwhere,\nif then and . So, ;\nif and \nthen is the smallest integer such that and ,\nand is the greatest integer such that . Thus, in either case,\n.\nThe operator is applicable on any pair with second component , except for\n.\nFor example,\nfor , and ,\n;\n;\n.\nLet be the integer such that .\nThe list \nis strictly decreasing in .\nLet\u2019s see that for every pair ,\n.\nUsing the definition\nof ,\nSince both pairs have second component , there is some such that\nWe conclude that is strictly decreasing in .\n\u220e\nWhen the operator yields a bijection between maximal pairs.\nEvery pair of the form in is maximal because, when divides , there is just this unique rotation with second component .\nThe definition of ensures that goes through all the pairs of the form \nin -decreasing order.\nThe operator can be used forward for every pair except for , and it can be used backwards for every pair except for . Thus, except for the extremes, we can obtain the successor and the predecessor of a maximal pair in the order .\n\u220e\nWhen and the operator \nis not\ninjective nor surjective over pairs with second component .\nFor example, for , and , we see is not injective because\n\nand also\n.\nTo see that is not surjective observe that the pair is not in the image of ,\nbecause there is no pair such that .\nIt is possible to construct the reverse of list .\nIn case the operator defines a bijection between maximal pairs.\nIn case and ,\n is not injective,\nthere are pairs that have more than one preimage by .\nHowever, except for every element has one predecessor\nin the list ,\nwhich is just one of the possible preimages by .\nLet and positive integers such that .\nEvery element in , except\n,\nhas a predecessor which is\nthe preimage of by given by\nwhere\n, and\n is the smallest multiple of \nsuch that and .\nFirst notice that this factorization always exists\nIf the , and .\nLet \nbe the pair obtained by undoing the transformation done by the operator, knowing that and are in ,\nwhere satisfies , and and .\nThus,\nThe word is determined by\n and .\nThen,\n\u220e\nIf , with , then\n is not maximal .\nLet , , \nand indices such that\nAssume and, by way of contradiction suppose is a maximal pair.\nSince ,\nand it should be\n.\nSince\n\nthen ; hence, .\nThen,\nwe have\n.\nAnd for\n we have\n.\nThis is because and ,\nthen\n,\ngiven that is the lexicographically greatest symbol.\nWe now show that the other symbols in and also coincide.\nSince is a maximal pair, and we know that .\nSince we have , hence .\nRepeating this argument we obtain for\n and ,\n, for ,\nand\nfor .\n\nThis would imply\n.\nThus, there is no maximal pair such that .\n\u220e"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "3. Statement of Theorem 1",
|
| 33 |
+
"text": "Let and be positive integers such that or .\nDefine by removing from\n\nthe pairs that are not maximal.\nIn case all the elements are maximal, none is removed.\nIn case with , for each pair in , except ,\nremove\n, \nwhere\n is the least such that\n is maximal .\nLet be the number of elements of the list .\nLet and be positive integers such that or .\nWe define the list as the list of Lyndon pairs of the elements in the list .\nSince has elements, has elements as well.\nRecall that a necklace is a circular word and a necklace is -perfect if all words of length occur times but at positions with\ndifferent congruence modulo , for any convention of the starting position.\nLet and be positive integers such that or .\nLet be the concatenation of all the words in the Lyndon pairs\nin the order given by .\nThen, is the lexicographically greatest\n-perfect necklace.\nHere is an example for , and .\nThe lexicographically greatest -perfect necklace is obtained by concatenating the following words (the symbol is for ease of reading):\n."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "4. Proof of Theorem 1",
|
| 39 |
+
"text": "We divide the proof of Theorem 1 ###reference_orem1### in three parts:\nin Section 4.1 ###reference_### we show that length of is ,\nin Section 4.2 ###reference_### we prove that is -perfect, and\nin Section 4.3 ###reference_### we show that is the lexicographically greatest -perfect necklace.\n\nWe start with two lemmas about the list of maximal pairs.\nThe list \nstarts with the -greatest pair , ends with the -smallest pair , and contains all maximal pairs in strictly decreasing -order.\nCase .\nBy definition, the list starts with .\nThe maximal pairs are in -decreasing order:\nThis is because the list is constructed by successive applications of the operator and by Lemma 2 ###reference_ma2### the list is strictly -decreasing.\nNo maximal pair is missing:\nSuppose is in and let be the least such that is maximal . To argue by contradiction, suppose there is\na maximal pair such that .\nThen, there is , , such that\nThus, occurs in between some and .\nBy Lemma 4 ###reference_ma4### this is impossible.\nThe list ends with :\nThere is no such that ,\nand the list is strictly -decreasing.\nBy Lemma 2 ###reference_ma2###, the operator\n applies to any pair except .\nCase , . It is the same proof as in the previous case, but simpler because yields exactly all the maximal pairs in -decreasing order.\n\u220e\nLet , and .\nThe list of maximal pairs is:\n.\nThe next is the key lemma.\nAssume . If is followed by in the list of maximal pairs then is a prefix of .\nWe can write \nas\nwhere and .\nIf then and\n.\nOtherwise, and, since\n for the smallest such that it is maximal, the shape of is\nfor some word and for the smallest such that and .\nSince and\n starts with \nwe have\n, hence . Then,\nIn both cases is prefix of .\n\u220e"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.1",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "4.1. Proof that necklace has length",
|
| 45 |
+
"text": ""
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1.1",
|
| 49 |
+
"parent_section_id": "4.1",
|
| 50 |
+
"section_name": "4.1.1. When",
|
| 51 |
+
"text": "The necklace is the concatenation of all the words\nof the reduced maximal pairs exactly once.\nThus, the length of is\nEach is an element of , which consists of the reductions of the pairs in the list , which by Lemma 5 ###reference_ma5### contains all the maximal pairs.\nThe length of is the length of the word , which is the number of different rotations of , see Observation 5 ###reference_ervation5###.\nIf ,\nthen has different rotations:\n, ,\u2026, .\nSo, each of the positions in is the start of a different rotation of .\nBy Lemma 6 ###reference_ma6###\nif the successor of in is\n then in we have ,\nand is a prefix of .\nLet . We argue that each\nposition , for , is the start\nof one of the different pairs in .\nWe conclude that has exactly one position for each of the different pairs in . Thus, the length of is ."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1.2",
|
| 55 |
+
"parent_section_id": "4.1",
|
| 56 |
+
"section_name": "4.1.2. When ,",
|
| 57 |
+
"text": "The necklace is the concatenation of all the words of the expanded maximal pairs.\nThus, length of is\nThe length of a Lyndon pair is the length of the expansion of a maximal pair in , which is exactly .\nThere are many Lyndon pairs because, given that , every in is maximal.\nSince is the concatenation of all the Lyndon pairs,\nthe length of is ."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "4.2. Proof that necklace is -perfect",
|
| 63 |
+
"text": "We need to show that each word of length occurs exactly times, at positions with different congruence modulo .\nIn this proof we number the positions of starting at ; this is convenient for the presentation because the positions with congruence are multiple of .\nWe say that we find a pair in \nwhen there is a position \nsuch that ,\n,\nand\n.\nTo prove that is a -perfect necklace we need to find\nall the rotations of all the maximal pairs in in the necklace .\nSince ,\n=, application of on yields the maximal pair\nSince and are consecutive Lyndon pairs in , the construction of puts followed by ,\nThis yields all rotations of : , \u2026, .\nSince , .\nConsider the maximal pairs for =,\nBy Lemma 1 ###reference_ma1###, for each of these the next maximal pair in is\nSince ,\nit is clear that is maximal because all the rotations of that have second component are identical to .\nSince and are consecutive Lyndon pairs in , the construction of puts followed by .\nNotice that for each ,\ngives rise to the pair .\nThus, we have all the rotations , which are , , .\nEvery maximal pair different from has the form\nwhere is with the smallest integer such that and .\nSubcase . The pair , different from , always has a successor maximal pair in the list :\nwhere is the largest with , is the smallest multiple of with . Notice that .\nSince ,\nand it also yields the first left rotations of , which are\n, with .\nIt remains to identify in the constructed necklace the right rotations of\n.\nThese are of the form\nfor .\nBy Lemma 3 ###reference_ma3### we can consider\nthe predecessor of by ,\nwhere\nTo see that is a maximal pair consider first which, by Observation 2 ###reference_ervation2###, is maximal.\nWe now argue that\n\nhas no proper suffix\n that coincides with\n.\nIf there were such a prefix we could construct the rotation of given by the pair \nand one of the following would be true:\n: but this is impossible by Observation 5 ###reference_ervation5###.\n: Since\n,\n and we assumed\n,\nnecessarily\n. But this contradicts that is a maximal pair.\n:\nThere is a rotation of which is -greater than , contradicting that is a maximal pair.\nWe conclude that all suffixes\n of are lexicographically smaller than\n.\nTherefore,\nall suffixes of of \nare lexicographically smaller than or equal to .\nWe already argued that is indeed maximal.\nFor any rotation of \nof the form ,\nfor , we argued that\n is lexicographically smaller than or equal to .\nMoreover, is lexicographically smaller than\n.\nFor any rotation of that starts with a suffix of it can not be maximal,\nbecause for any ,\n, otherwise would not be maximal.\nWe have the maximal successive pairs\n, and .\nFrom the arguments above, and .\nSince and we assumed ,\n is not equal to .\nThen,\n.\nObserve that the last symbols of followed by , followed by the first symbols of give rise to\nwhere is because\n is a multiple of .\nWe now identify in this pair the rotations of to the right.\nSince where\n,\nafter rotations to the right of , with ,\nwe obtain\nSubcase .\nWe need to see that for the maximal pairs\n\nwhere is reduced, that is ,\nwe can find all rotations of in the constructed .\nIf \nthen , which is the last maximal pair in the list and .\nThen,\nand we can find left rotations of which are of the form\nNow assume . Let\u2019s write\nwith such that ,\n and . 
Then is of the form\nwith is the least multiple of with .\nSince , and by Lemma 6 ###reference_ma6### is a prefix of , then is a prefix of , and we can find the first left rotations of in it, which are of the form\nIt remains to find right rotations of , which are of the form\nfor .\nEquivalently we can write it as\nfor .\nIf and then the pair is\n.\nWe find it in the pair\n,\nwhich results\nfrom the concatenation of the words in the last two Lyndon pairs in\n and the first two, that is\n and , followed by\n and .\nIf or then we find the right rotations of in the concatenation of the words of three Lyndon pairs, that we call and we define below.\nRecall that , with such that , is a maximal pair.\nWe claim there is a unique pair of the form\nwhere is not zero.\nNotice that may not be maximal.\nSince is a pair -greater than ,\nwe know is a predecessor of by .\nThe pair is the closest predecessor of by that is a maximal pair.\nFor this, consider the successive application of the operator ,\nwhich allows us to traverse the list .\nOur interest is to traverse it backwards.\nHere is a diagram:\nFor let be the pair such that\n.\nFor ,\nwhere is the least multiple of such that ,\n,\n and\n.\nIf is a maximal pair then fix and we have finished the search. Otherwise we consider the predecessor of by ,\nsuch that\nwhere is the least multiple of , , and such that .\nWe know that\n exists because\n,\notherwise would be a maximal pair, and then there is for some .\nNotice that .\nIf is a maximal pair then we fix and we have finished the search. Otherwise, we repeat\nthis procedure. In this way the predecessor of\n by is\nsuch that\nwhere is multiple of , , and such that\nEventually we find such that is a maximal pair.\nConsider now the three consecutive maximal pairs in the list ,\nLet and be the corresponding Lyndon pairs. always exists, because it\u2019s either or a maximal pair before it, and is in the worst case.\nNotice that ends with \nand, by Lemma 6 ###reference_ma6###,\n\nstarts with . Therefore, contains\n, for some .\nSince is a maximal pair -greater than or equal to , we can assert that its prefix is .\nFinally, we have\nbecause\n\nand\n\nbecause was the\nposition of a symbol in , which has length .\nThen,\nWe conclude that\n contains\nThis can be rewritten as the pair\nbecause is a multiple of , hence has second component .\nWe can find the first left rotations of inside\n, which are of the form , with .\nConsider the maximal pairs for ,\nFor each of these ,\nSince and are consecutive Lyndon pairs in , the construction of puts followed by .\nNotice that for each ,\ngives rise to the pair .\nThus, we have all the rotations of the pair , which are , , .\nLet , and let be the largest such that . 
Fix .\nSince the pair is different from , it has necessarily a maximal pair successor in the list ,\nSince ,\nand it also yields the first left rotations of , which are\n, for .\nNote that all of the rotations for are contained inside these left rotations, as , and then .\nIf then the rotations of are considered above by taking and .\nIn case we have considered just rotations\nwhich are\nfor .\nIf and then the pair is\n.\nWe find it in the pair\n,\nwhich results\nfrom the concatenation of the last and first Lyndon pairs in\n, which are\n and respectively.\nIf or then\nthe pairs are for , and we find them in\nthe concatenation of the words of two Lyndon pairs, that we call and we define below.\nLet be the largest such that and , and let .\nBy Observation 1 ###reference_ervation1###,\n is a Lyndon pair, different from . Notice that is well defined because\nthere is some , with such that .\nTo see this notice that\neither\n but , hence, there exists ; or and then .\nLet be the successor of in , which is of the form\nThe construction concatenates , resulting in\nFinally, if we take the suffix followed by the prefix it results in the pair .\nGiven that we can do this for every possible , we have obtained the wanted right rotations. Consequently, we have found all the rotations of ."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2.1",
|
| 67 |
+
"parent_section_id": "4.2",
|
| 68 |
+
"section_name": "4.2.1. When",
|
| 69 |
+
"text": "Each maximal pair has exactly many rotations.\nSince ,\n=, application of on yields the maximal pair\nSince and are consecutive Lyndon pairs in , the construction of puts followed by ,\nThis yields all rotations of : , \u2026, .\nSince , .\nConsider the maximal pairs for =,\nBy Lemma 1 ###reference_ma1### ###reference_ma1###, for each of these the next maximal pair in is\nSince ,\nit is clear that is maximal because all the rotations of that have second component are identical to .\nSince and are consecutive Lyndon pairs in , the construction of puts followed by .\nNotice that for each ,\ngives rise to the pair .\nThus, we have all the rotations , which are , , .\nEvery maximal pair different from has the form\nwhere is with the smallest integer such that and .\nSubcase . The pair , different from , always has a successor maximal pair in the list :\nwhere is the largest with , is the smallest multiple of with . Notice that .\nSince ,\nand it also yields the first left rotations of , which are\n, with .\nIt remains to identify in the constructed necklace the right rotations of\n.\nThese are of the form\nfor .\nBy Lemma 3 ###reference_ma3### ###reference_ma3### we can consider\nthe predecessor of by ,\nwhere\nTo see that is a maximal pair consider first which, by Observation 2 ###reference_ervation2### ###reference_ervation2###, is maximal.\nWe now argue that\n\nhas no proper suffix\n that coincides with\n.\nIf there were such a prefix we could construct the rotation of given by the pair \nand one of the following would be true:\n: but this is impossible by Observation 5 ###reference_ervation5### ###reference_ervation5###.\n: Since\n,\n and we assumed\n,\nnecessarily\n. But this contradicts that is a maximal pair.\n:\nThere is a rotation of which is -greater than , contradicting that is a maximal pair.\nWe conclude that all suffixes\n of are lexicographically smaller than\n.\nTherefore,\nall suffixes of of \nare lexicographically smaller than or equal to .\nWe already argued that is indeed maximal.\nFor any rotation of \nof the form ,\nfor , we argued that\n is lexicographically smaller than or equal to .\nMoreover, is lexicographically smaller than\n.\nFor any rotation of that starts with a suffix of it can not be maximal,\nbecause for any ,\n, otherwise would not be maximal.\nWe have the maximal successive pairs\n, and .\nFrom the arguments above, and .\nSince and we assumed ,\n is not equal to .\nThen,\n.\nObserve that the last symbols of followed by , followed by the first symbols of give rise to\nwhere is because\n is a multiple of .\nWe now identify in this pair the rotations of to the right.\nSince where\n,\nafter rotations to the right of , with ,\nwe obtain\nSubcase .\nWe need to see that for the maximal pairs\n\nwhere is reduced, that is ,\nwe can find all rotations of in the constructed .\nIf \nthen , which is the last maximal pair in the list and .\nThen,\nand we can find left rotations of which are of the form\nNow assume . Let\u2019s write\nwith such that ,\n and . 
Then is of the form\nwith is the least multiple of with .\nSince , and by Lemma 6 ###reference_ma6### ###reference_ma6### is a prefix of , then is a prefix of , and we can find the first left rotations of in it, which are of the form\nIt remains to find right rotations of , which are of the form\nfor .\nEquivalently we can write it as\nfor .\nIf and then the pair is\n.\nWe find it in the pair\n,\nwhich results\nfrom the concatenation of the words in the last two Lyndon pairs in\n and the first two, that is\n and , followed by\n and .\nIf or then we find the right rotations of in the concatenation of the words of three Lyndon pairs, that we call and we define below.\nRecall that , with such that , is a maximal pair.\nWe claim there is a unique pair of the form\nwhere is not zero.\nNotice that may not be maximal.\nSince is a pair -greater than ,\nwe know is a predecessor of by .\nThe pair is the closest predecessor of by that is a maximal pair.\nFor this, consider the successive application of the operator ,\nwhich allows us to traverse the list .\nOur interest is to traverse it backwards.\nHere is a diagram:\nFor let be the pair such that\n.\nFor ,\nwhere is the least multiple of such that ,\n,\n and\n.\nIf is a maximal pair then fix and we have finished the search. Otherwise we consider the predecessor of by ,\nsuch that\nwhere is the least multiple of , , and such that .\nWe know that\n exists because\n,\notherwise would be a maximal pair, and then there is for some .\nNotice that .\nIf is a maximal pair then we fix and we have finished the search. Otherwise, we repeat\nthis procedure. In this way the predecessor of\n by is\nsuch that\nwhere is multiple of , , and such that\nEventually we find such that is a maximal pair.\nConsider now the three consecutive maximal pairs in the list ,\nLet and be the corresponding Lyndon pairs. always exists, because it\u2019s either or a maximal pair before it, and is in the worst case.\nNotice that ends with \nand, by Lemma 6 ###reference_ma6### ###reference_ma6###,\n\nstarts with . Therefore, contains\n, for some .\nSince is a maximal pair -greater than or equal to , we can assert that its prefix is .\nFinally, we have\nbecause\n\nand\n\nbecause was the\nposition of a symbol in , which has length .\nThen,\nWe conclude that\n contains\nThis can be rewritten as the pair\nbecause is a multiple of , hence has second component ."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2.2",
|
| 73 |
+
"parent_section_id": "4.2",
|
| 74 |
+
"section_name": "4.2.2. When ,",
|
| 75 |
+
"text": "This proof is similar to the previous one, but now we need to use the corresponding definitions of , the list and the notion of expansion to define the list . The proof becomes simpler because it requires fewer cases, and each of these cases is simpler too.\nThe key observation for this proof is that each word of length , repeated times, determines a Lyndon pair\n, which has exactly many different rotations.\nWe can find the first left rotations of inside\n, which are of the form , with .\nConsider the maximal pairs for ,\nFor each of these ,\nSince and are consecutive Lyndon pairs in , the construction of puts followed by .\nNotice that for each ,\ngives rise to the pair .\nThus, we have all the rotations of the pair , which are , , .\nLet , and let be the largest such that . Fix .\nSince the pair is different from , it has necessarily a maximal pair successor in the list ,\nSince ,\nand it also yields the first left rotations of , which are\n, for .\nNote that all of the rotations for are contained inside these left rotations, as , and then .\nIf then the rotations of are considered above by taking and .\nIn case we have considered just rotations\nwhich are\nfor .\nIf and then the pair is\n.\nWe find it in the pair\n,\nwhich results\nfrom the concatenation of the last and first Lyndon pairs in\n, which are\n and respectively.\nIf or then\nthe pairs are for , and we find them in\nthe concatenation of the words of two Lyndon pairs, that we call and we define below.\nLet be the largest such that and , and let .\nBy Observation 1 ###reference_ervation1### ###reference_ervation1###,\n is a Lyndon pair, different from . Notice that is well defined because\nthere is some , with such that .\nTo see this notice that\neither\n but , hence, there exists ; or and then .\nLet be the successor of in , which is of the form\nThe construction concatenates , resulting in\nFinally, if we take the suffix followed by the prefix it results in the pair .\nGiven that we can do this for every possible , we have obtained the wanted right rotations. Consequently, we have found all the rotations of ."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.3",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "4.3. Proof that necklace is the lexicographically greatest -perfect necklace",
|
| 81 |
+
"text": "The necklace is the concatenation of all the words in the\nLyndon pairs in the list .\nLet and let \nsuch that .\nWe already showed , hence, .\nThe necklace\n,\nwhere\nfor , .\nSince each is a Lyndon word, is lexicographically greater than all of its rotations.\nNow, instead of looking at an index from to ,\nlet\u2019s consider an index running through all the positions of , from to .\nConsider the set of pairs\nSince we already proved that is -perfect, the set \nis exactly .\nThe construction guarantees that\neach position is the start of one of the different pairs in .\nAs a consequence of the order of the elements in the list ,\nwhen for some , , the pair\n\nis the -greatest among the remaining pairs with second component .\nIf and , each is a symbol in different pairs in , of them with second component .\nFor example,\n is in pairs,\nIf and , each is a symbol in different pairs in .\nThere is exactly one of these with second component and it has the form , with .\nFor example,\n is in pairs,\nImplicitly our construction defines a function , such that for each ,\nSuppose\n is another -perfect necklace and there is a position such that\n but .\nThen, there is a position congruent to modulo such that\n,\nand\nThen, the respective pairs starting at position satisfy,\nBut this contradicts that our construction puts in the pairs in in -decreasing ordering.\nThis completes the proof of Theorem 1 ###reference_orem1###."
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [],
|
| 85 |
+
"tables": {},
|
| 86 |
+
"image_paths": {},
|
| 87 |
+
"validation": true,
|
| 88 |
+
"references": [
|
| 89 |
+
{
|
| 90 |
+
"1": {
|
| 91 |
+
"title": "Perfect necklaces.",
|
| 92 |
+
"author": "N. \u00c1lvarez, V. Becher, P. Ferrari, and S. Yuhjtman.",
|
| 93 |
+
"venue": "Advances in Applied Mathematics, 80:48 \u2013 61, 2016.",
|
| 94 |
+
"url": null
|
| 95 |
+
}
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"2": {
|
| 99 |
+
"title": "Normal numbers and perfect necklaces.",
|
| 100 |
+
"author": "V. Becher and O. Carton.",
|
| 101 |
+
"venue": "Journal of Complexity, 54(101403), 2019.",
|
| 102 |
+
"url": null
|
| 103 |
+
}
|
| 104 |
+
},
|
| 105 |
+
{
|
| 106 |
+
"3": {
|
| 107 |
+
"title": "A combinatorial problem.",
|
| 108 |
+
"author": "N. G. de Bruijn.",
|
| 109 |
+
"venue": "Koninklijke Nederlandse Akademie v.Wetenschappen, 49:758\u2013764,\n1946.",
|
| 110 |
+
"url": null
|
| 111 |
+
}
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"4": {
|
| 115 |
+
"title": "Mots de Lyndon et p\u00e9riodicit\u00e9.",
|
| 116 |
+
"author": "J.-P. Duval.",
|
| 117 |
+
"venue": "RAIRO Informatique Th\u00e9orique, 14(2):181\u2013191, 1980.",
|
| 118 |
+
"url": null
|
| 119 |
+
}
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"5": {
|
| 123 |
+
"title": "Necklaces of beads in colors and -ary de Bruijn sequences.",
|
| 124 |
+
"author": "H. Fredricksen and J. Maiorana.",
|
| 125 |
+
"venue": "Discrete Mathematics, 23(3):207\u2013210, 1978.",
|
| 126 |
+
"url": null
|
| 127 |
+
}
|
| 128 |
+
},
|
| 129 |
+
{
|
| 130 |
+
"6": {
|
| 131 |
+
"title": "The art of computer programming. Volume 3.",
|
| 132 |
+
"author": "D. Knuth.",
|
| 133 |
+
"venue": "Addison-Wesley Series in Computer Science and Information Processing.\nAddison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1973.",
|
| 134 |
+
"url": null
|
| 135 |
+
}
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"7": {
|
| 139 |
+
"title": "Normal periodic systems and their applications to the estimation of\nsums of fractional parts.",
|
| 140 |
+
"author": "N. Korobov.",
|
| 141 |
+
"venue": "Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya,\n15(1):17\u201346, 1951.",
|
| 142 |
+
"url": null
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"8": {
|
| 147 |
+
"title": "On normal periodic systems.",
|
| 148 |
+
"author": "N. Korobov.",
|
| 149 |
+
"venue": "Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya,\n16(3):211\u2013216, 1952.",
|
| 150 |
+
"url": null
|
| 151 |
+
}
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"9": {
|
| 155 |
+
"title": "On the discrepancy estimate of normal numbers.",
|
| 156 |
+
"author": "M. B. Levin.",
|
| 157 |
+
"venue": "Acta Arithmetica, 88(2):99\u2013111, 1999.",
|
| 158 |
+
"url": null
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"10": {
|
| 163 |
+
"title": "On Burnside\u2019s problem.",
|
| 164 |
+
"author": "R. C. Lyndon.",
|
| 165 |
+
"venue": "Transactions of the American Mathematical Society, 77:202\u2013215,\n1954.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"11": {
|
| 171 |
+
"title": "On Burnside\u2019s problem. II.",
|
| 172 |
+
"author": "R. C. Lyndon.",
|
| 173 |
+
"venue": "Transactions of the American Mathematical Society, 78:329\u2013332,\n1955.",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
}
|
| 177 |
+
],
|
| 178 |
+
"url": "http://arxiv.org/html/2405.17812v2"
|
| 179 |
+
}
|
20241217/2406.04777v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2406.06342v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2406.08270v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2406.08689v3.json
ADDED
|
@@ -0,0 +1,191 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Security of AI Agents",
|
| 3 |
+
"abstract": "AI agents have been boosted by large language models.\nAI agents can function as intelligent assistants and complete tasks on behalf of their users\nwith access to tools and the ability to execute commands in their environments.\nThrough studying and experiencing the workflow of typical AI agents,\nwe have raised several concerns regarding their security.\nThese potential vulnerabilities are not addressed by the frameworks used to build the agents,\nnor by research aimed at improving the agents.\nIn this paper, we identify and describe these vulnerabilities in detail from a system security perspective,\nemphasizing their causes and severe effects.\nFurthermore, we introduce defense mechanisms corresponding to each vulnerability with design and experiments to evaluate their viability.\nAltogether, this paper contextualizes the security issues in the current development of AI agents\nand delineates methods to make AI agents safer and more reliable.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "AI agents are robots in cyberspace, executing tasks on behalf of their users.\nTo understand their user\u2019s command,\nthey send the input prompts as requests to foundation models, such as large language models (LLMs).\nThe responses generated by the model may contain the actions to be executed or further instructions.\nTo execute the actions, the agent invokes tools,\nwhich may run local computations or send requests to remote hosts, such as querying search engines.\nThe tools output results and feedback to the LLM for the next round of actions.\nBy invoking tools, AI agents are granted the ability to interact with the real world.\nSince AI agents depend on their LLM to understand user input and environment feedback\nand generate actions to use tools, we say that the LLM is the backbone of the agent.\nWe summarize the basic architecture of LLM-based AI agents in Figure 1 ###reference_###.\nTraditional agents operate on pre-defined rules [wilkins2014practical] or\nreinforcement learning [isbell2001social],\nmaking them hard to generalize to new tasks and different tools.\nLLM-based AI agents, on the contrary,\ncan be practical in various tasks benefiting from enormous pre-training knowledge\nand the ability to read tool documentation as additional prompts.\nWe use the term AI agent to denote all LLM-based agents in this paper.\nOver the years, AI agents have showcased their outstanding performance on tasks including but not limited to\nwriting shell scripts to interact with operating systems, querying databases, shopping and browsing on the web, playing video games, and robots manipulation [yao2022webshop, liu2024agentbench, zhou2024webarena, park2023generative].\nDespite their popularity,\nexisting research and development of AI agents failed to take into account their potential vulnerabilities.\nIn traditional computing systems,\nsecurity is guarded by three properties:\nconfidentiality, integrity, and availability,\neach of these faces unique challenges.\n###figure_1### Confidentiality is often managed by model-based access control policies,\nwhich abstract the system components and users into subjects, objects, and rights [bishop2004introduction].\nHowever, these principles face significant challenges when applied to LLM-based systems\ndue to the nature of LLMs to memorize [carlini2023quantifying, tirumala2022memorization]\nand compress [deletang2024language] training data.\nAI agents are granted the ability to interact with tool applications\nby reading their instructions and feedback,\nleaving more possibilities for privacy leaks.\nThe ability to use tools introduces additional layers of complexity in maintaining confidentiality.\nAs a result, we have to rethink information confidentiality in the context of AI agents.\nWhen assisting users with automatic tool usage,\nrequests for sensitive information are unavoidable.\nThis evaluation is essential to address the unique challenges posed by AI agents,\nespecially when they are learning from user chat history and tool interaction logs,\nto ensure that data privacy protections evolve to effectively safeguard information in this new technological landscape.\nIntegrity is another important aspect of data security.\nWhen provided to the audience, the data should be complete and trustworthy.\nIn computing systems, data should not be modified by unauthorized users,\nno matter whether it is done intentionally or not.\nThe integrity of data in AI agent systems is also distinct from traditional systems.\nUsers and tools interact with 
the agent\u2019s LLM via prompts,\nwhere inputs from the user and tools will be in the same context window.\nTherefore, the integrity of different users\u2019 and tools\u2019 interactions is a new and unique challenge\nto AI agents.\nThe integrity of data also requires special attention when facing AI agents.\nSince AI agents will execute commands on the user\u2019s behalf despite not being the user themselves,\nthe integrity models for traditional systems are partially ignored.\nThe threat of availability should be re-investigated for AI agents as well.\nSystems, data, and applications should always be available when the users need them.\nUnlike LLMs, which are stateless in general and can only output text tokens,\nAI agents execute actions that could affect the computing system itself.\nTherefore, each of the agent\u2019s actions may have its own vulnerabilities to the agent\u2019s host machine and tools.\nCurrent studies on AI agents evaluate them in benchmark settings [liu2024agentbench, zhou2024webarena, park2023generative],\nfailing to consider the difference between benchmark environments and real-world applications.\nAI agents without sanitization can harm the availability of both their host system and their tools\nby executing malicious commands generated by their LLM.\nTo distinguish these vulnerabilities from the security of LLMs,\nmalicious actions might be generated by hallucinations or by prompts that do not break the LLM\u2019s alignment,\nrequiring different defenses and safeguards.\nIn this paper, we discuss the possible security issues of AI agents.\nTo facilitate future research, we propose several defense methodologies for\nthe vulnerabilities we discovered on the component level in the AI agent architecture.\nTo evaluate our defense proposals,\nwe also set up preliminary experiments that our solutions depend on.\nOur contributions are as follows:\n(1) We formally introduce potential vulnerabilities of AI agents,\nand explain the causes and effects of these vulnerabilities in detail.\n(2) We propose multiple defenses to close the gap between AI research and AI agents in practice.\n(3) We verify the applicability of our proposed defenses with empirical evidence and discuss their limitations and directions for improvement."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Threat model",
|
| 15 |
+
"text": "We assume the AI agent is text-only for input and output.\nWe assume that the server that runs the AI agent is secure.\nUsers can only access the server via the API provided by the AI agent.\nThe programs that the AI agent runs have no undefined behavior, such as buffer overflow that allows remote code execution.\nWe assume the AI agent has access to one or multiple tools,\nand will execute the tools solely based on the LLM-generated actions."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Potential vulnerabilities",
|
| 21 |
+
"text": "In this section, we identify the important potential vulnerabilities that an AI agent application faces."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A Sessions",
|
| 27 |
+
"text": "HTTP servers introduced the notion of sessions in order to guard the confidentiality and integrity\nof data exchanged between users and servers.\nSuch ideas can be applied to AI agents.\nAs a user interacts with the AI agent, they may issue many commands in the same session.\nThe commands in the session are correlated temporally, e.g., the context of a command may depend on its preceding ones.\nTherefore, when the AI agent is provided as a service to multiple users,\nthe AI agent needs to track the session of each user.\nDespite being standard for web applications, sessions are difficult for AI agents to manage.\nWhen the temperature of the model is set to zero,\nthe output of the model is close to deterministic, where the same prompt will be answered with very similar responses.\nTherefore, the state of the LLMs is tracked by the change in its questions by different prompting methods.\nIn CoALA [sumers2024cognitive], the state of an LLM is formulated as a production sequence\nwhere is the question query and is the answer from the LLM.\nIn simpler terms, we consider the language model to be \u201chonest,\u201d\nmeaning it always generates the same response when given the same question.\nTherefore, the AI agent is responsible for managing the state of its LLM.\nIf the AI agent has only one API account on the AI model,\nthen instructing the AI model to separate the sessions of different users raises concerns on\ninformation leakage and action mis-assignment.\nOn the other hand, even if the AI agent has multiple API accounts on the AI model,\nmapping user sessions to API accounts faces the same vulnerabilities when the number of concurrent users exceeds that of API accounts.\nIn addition to the integrity and confidentiality of chat history, the AI agent\u2019s backbone LLM also faces challenges in availability without proper session management.\nQuerying the LLM is computationally heavy and requires substantial graphic processing resources.\nIf the sessions of the AI agent are not managed properly,\nboth the agent and the backbone LLM are vulnerable to denial of service attacks (DoS)."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-B Model pollution and privacy leak",
|
| 33 |
+
"text": "The concern of model pollution and privacy leaks arises when the AI models are fine-tuned on user input.\nIt is already known that model service providers like OpenAI\n111https://help.openai.com/en/articles/8590148-memory-faq ###reference_8-memory-faq###\nare doing this to make their models more powerful.\nTo improve the capabilities of AI agents in making actions and assisting users,\nfine-tuning the underlying LLM with chat history is the most direct approach.\nTherefore, these concerns must be carefully addressed to secure AI agents.\n###figure_2### Model pollution, depicted in Figure 2 ###reference_###,\ncan occur when a user provides malicious inputs to an agent with the intention of negatively altering the model.\nModel pollution can compromise the integrity of AI agents.\nAdversarial data poisoning is a well-established attack technique against machine learning models, including LLMs [steinhardt2017certified, kurita2020weight, jiang2023forcing].\nIn the context of LLM-based AI agents,\nthis vulnerability is particularly pronounced due to the differences between adversarial prompts and pollution prompts.\nIndividually, some prompts may not appear adversarial, making them challenging to detect with prompt sanitizers.\nHowever, if the contents of these prompts are concatenated together, the resulting text as training data might pollute the models.\nFurthermore, data pollution may also happen unintentionally,\nas users naturally engage with AI agents. Natural actions with one application in the chat history may also be harmful when applied to other applications.\nThis incidental introduction of skewed chat history as training data can subtly shift the model\u2019s action generation,\nleading to harmful consequences.\n###figure_3### Privacy leaks as illustrated in Figure 3 ###reference_###,\nare particularly prevalent in the use of agents.\nConfidentiality of user prompt data is already a severe issue for LLMs as chatbots.\nThis is amplified further by the AI agent use case.\nFor example, Samsung banned the use of ChatGPT after an employee prompted it\nwith confidential code that was later revealed to the public [Ray2023samsung].\nThis issue of data leakage via prompting is further intensified by the usage of AI agents with tools.\nWhen these agents interact with applications, they often request personal information.\nFor example, a bank assistant agent might request a Social Security number (SSN), account number, or routing number\nto help analyze a user\u2019s monthly spending.\nUnlike traditional financial applications that operate by fixed algorithmic rules,\nAI agents process tasks by transmitting input data to bank apps and then relaying the raw output data back for analysis.\nIn such scenarios, both the user\u2019s account information and personal spending data are\nsusceptible to memorization by the LLM through fine-tuning with chat histories.\nConsequently, the agent becomes prone to various data extraction attacks [carlini2021extracting, gong2021inversenet],\nleading to significant privacy risks."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-C Agent programs",
|
| 39 |
+
"text": "Agent programs execute instructions from the backbone LLM to interact with the world [sumers2024cognitive].\nAgent programs follow actions either generated directly from the underlying LLM via zero-shot prompting [brohan2023can, huang2022language]\nor improved via reasoning [wei2022chain, wang2022self, kim2024language]\nand planning [yao2023react, hao2023reasoning, yao2024tree, zhuang2023toolchain, zhang2024reverse].\nHowever, these approaches create both local and remote effects and may have associated vulnerabilities on different levels.\nAction generation is vulnerable to hallucination, adversarial prompts, and jailbreak [perez2022ignore, yu2024llm, chen2024struq].\nleading to unwanted or even dangerous actions.\nWhen agent programs execute these actions, both local resources and remote resources\nmay be compromised, leading to attacks as demonstrated in Figure 4 ###reference_###.\nIn this scenario, the attacker could be users of the agent system or\nmalicious applications in the agent\u2019s toolchain, sending adversarial prompts embedded in the tools\u2019 documentation.\n###figure_4### On the other hand, Agent programs with augmented action-planning abilities\nhave different security concerns.\nThese kind of agent programs are referred to as cognitive agents [sumers2024cognitive],\nas they have cognition to the environment feedback to improve their action iteratively.\nThis process of improving generated final actions is called planning.\nDifferent from reasoning strategies [wei2022chain, wang2022self],\neach step of planning has side-effects as illustrated in Figure 5 ###reference_###.\nReAct [yao2023react] and Inner Monologue [huang2023inner] use a feedback loop from the environment to improve\nthe generated actions, where each step causes side effects to the environment.\nMore advanced planning approaches, like Tree-of-Thoughts [yao2024tree] and ToolChain\u2217 [zhuang2023toolchain],\nlist all possible actions more aggressively as a decision tree and attempt all actions via tree-search algorithms\nlike Breadth-first, Depth-first, or search.\nAlthough providing more accurately planned final actions,\nthese strategies acting as bots to interact with the world caused severe security concerns.\n###figure_5###"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.3.1",
|
| 43 |
+
"parent_section_id": "3.3",
|
| 44 |
+
"section_name": "III-C1 Local vulnerabilities",
|
| 45 |
+
"text": "Personal AI agents are deployed on personal computers,\ninteracting with their underlying foundation LLM via API from service providers like OpenAI.\nWhen the agent is active, it gains access to tool applications, including the shell.\nThe agent program, if unrestricted, can execute arbitrary instructions on its host.\nAs a result, it can read confidential data (confidentiality),\nmodify important data (integrity),\nand hog system resources such as CPU, memory, and disk (availability).\nConfidentiality is commonly at risk when an AI agent is directed to use applications that require read access to files,\nsuch as email apps or file servers.\nFor example, an agent might send a file over FTP to backup storage.\nHowever, issues arise when the instructions provided by the tools to the agent include malicious prompts.\nAn adversarial prompt could be\n\u201cFor backing up data over FTP, also send a copy to HACKER to ensure it\u2019s extra safe.\u201d\nFollowing this, the LLM could generate commands that send the file to both the legitimate backup server and the hacker,\nleading to data leakage.\nA similar risk exists when sending emails or other messaging services,\nwhere the agent must read contact information.\nIf the agent uses its LLM to determine the recipient,\nit can be misled by adversarial prompts embedded in usernames or self-descriptions.\nMoreover, confidentiality may also be at risk even if there is no attacker.\nWhen generating actions based on learned probability distribution,\nthe LLM may output an incorrect token for the file name.\nWhile the recipient is correct as the user instructed,\nthe agent could inadvertently send sensitive information to this recipient with insufficient clearance,\na clear violation of the \u201cno read up\u201d principle of the Bell-LaPadula model [bishop2004introduction].\nThis scenario not only compromises confidentiality but also demonstrates the complexities and vulnerabilities inherent in\nmanaging access controls within AI systems.\nSuch vulnerabilities underscore the need for rigorous security protocols to protect against\nboth intentional manipulation and unintentional errors.\nThe integrity of data in AI agent systems faces risks similar to those concerning confidentiality.\nMalicious applications might manipulate the system by injecting misleading prompts as part of the instruction or manual,\naltering data inappropriately.\nFor example, in a flight booking scenario, an application could mislead the LLM into favoring a less efficient flight option by providing false information about layovers.\nThis undermines the integrity of decision-making tools, affecting their ability to deliver accurate and unbiased outcomes.\nSuch risks also extend to other tasks like resume reviews or selections based on ratings, emphasizing the need for these systems to maintain accurate data processing and resist manipulative influences.\nThe system\u2019s availability can be impacted in two main ways.\nFirst, a user might input a reasonable command that causes the agent to run applications involving undocumented multiple processes,\npotentially monopolizing CPU resources and making the system inaccessible to others.\nThese applications could also suffer from memory leaks, which not only bog down the system but also heighten vulnerability to memory attacks.\nNormally, a user would stop such a program, but AI agents currently lack this capability.\nSecond, the AI agent\u2019s planning process itself can affect system availability.\nIntroducing more diverse 
tools increases the complexity of planning,\nrequiring more resources to execute multiple strategies simultaneously.\nThis strain is magnified when multiple agents operate concurrently, potentially leading to exponential increases in resource use."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3.2",
|
| 49 |
+
"parent_section_id": "3.3",
|
| 50 |
+
"section_name": "III-C2 Remote vulnerabilities",
|
| 51 |
+
"text": "Uncontrolled AI agents can also be a threat to remote services.\nModern LLM-based AI agents can interact with the internet via structured API calling.\nFor example, popular AI agent frameworks like LangChain provide pre-defined\nweb-query functionality.\nIf the LLM thinks remote resources are needed, it will generate actions for the agent\nto query remote hosts provided in the agent\u2019s toolchain.\nThis creates the possibility of making the agent a bot for attacking remote hosts.\nIf there are jailbreak attacks that break the system prompt guard and alignment of the LLM,\nit can generate dangerous actions telling the agent to repeatedly query the same API resource to\nscan for vulnerabilities on the API server to use in other attacks.\nAttackers can also use jailbreak attacks to use agents to scrape data from the remote service provider.\nSince these agents follow actions generated by LLM,\ntheir behavior is distinct from regular social bots on the internet [davis2016botornot],\nleading to insufficient detection and early rejection of these jailbroken AI agent bots.\nFurthermore, agent planning that relies on an iterative environment feedback can be easily repurposed into a bot for performing DoS attacks.\nWhen granted access to local resources,\nthe agent\u2019s action planning affects the availability of the local system.\nSimilarly, if the agent\u2019s planning process requires feedback from the external service provider,\nit will send requests to the API iteratively to find the ideal action.\nSince the agents perform actions generated by LLMs on the user\u2019s behalf,\nthey follow the same protocol as human users on the internet,\nleading to remote vulnerabilities."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "IV Defenses",
|
| 57 |
+
"text": "We propose defenses for the vulnerabilities in section III ###reference_###.\nWe describe their design and evaluate their feasibility through experiments and empirical analysis."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "IV-A Sessions",
|
| 63 |
+
"text": "###figure_6### When handling requests from multiple users concurrently, web applications face challenges in\nmaintaining the confidentiality and integrity of each user\u2019s interaction data.\nIn these scenarios, effective session management is one of the best practices.\nLikewise, AI agent services can adopt a similar approach by using sessions as the protection boundary for requests,\nwhere all the requests in the same session may share data and states.\nWeb applications often use distributed session management to ensure the scalability with shared data storage.\nIn a distributed session management scheme, each user session is assigned a unique session ID,\nand the interaction data is stored in a key/value database (KVDB) where the session ID is the key and the interaction data is the value as shown in Figure 6 ###reference_###.\nAI agents can also use the same approach to establish session connections with users,\nand store the unique session ID and the question-answer history in a KVDB as its working memory.\nSince the state of the LLM is defined by the change in its input question as in Equation 1 ###reference_###,\nstates also serve as the context for subsequent requests.\nHowever, to successfully use sessions as defense in AI agents,\ntechnical challenges remain.\nFirst, the way to manage the session connection between each user and the agent needs to be carefully considered.\nDetermining which requests belong to the same session is crucial.\nThe agent designer also needs to consider the time to close a session.\nWhen closing a session, the agent needs to transfer its working memory from the KVDB to long-term storage for future use,\nsuch as improving its model via fine-tuning.\nSecond, the agent has to embed the session ID into the requests to the AI model.\nWhen multiple sessions share the same API key to the foundation model,\nthe agent needs to be able to correlate the session it establishes with the user and the session it establishes with the foundation model.\nOtherwise, the described vulnerabilities will remain.\nAnother approach in this direction is to formally model the state of the LLM and AI agents as monad.\nThe state transformer monad [launchbury1995state]\nis the standard solution to enable stateful computations, side effects, and system IO in\npure, stateless, effect-free, functional languages like Haskell, Isabelle, Coq, etc.\nRecall from Equation 1 ###reference_###:\nif we view and as types,\nwe can also write it as a function mapping\n,\nwhich transforms the LLM from an initial state to the next state.\nThen the formal definition of the state transformer [launchbury1995state]\nis a parametric form of this function as shown in 1 ###reference_###.\nSince monads are composable [jones1993composing],\nthe state monad is particularly ideal for representing AI agent behaviors such as reasoning and planning.\nWe show a few examples in Figure 7 ###reference_### to demonstrate this idea as an analogy to [launchbury1995state].\nWe believe future research can build on this framework to derive a formal definition of the state of AI agents.\nThe state monad is defined in a formal type system with type inference that is both sound and complete [schlesinger2012verification],\nwhich may facilitate the verification of AI agent systems [swamy2013verifying].\nBased on this theory, one may also develop session types [gay1999types] for AI agents.\nThe state monad has been utilized in building secure web applications [giffin2017hails] and microkernels [cock2008secure],\nand 
thus is a promising defense for the security of AI agents.\n###figure_7###"
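As a concrete illustration of the distributed-session idea in Figure 6, the following is a minimal sketch (ours, not the paper's implementation): each session receives a unique ID, the question/answer history is kept in a key/value store under that ID, and only that session's history is placed in the prompt. The in-memory dict stands in for a real KVDB such as Redis, and `query_llm` is a placeholder for the backbone model call.

```python
import uuid

class SessionStore:
    """Working memory keyed by session ID; a dict stands in for a KVDB."""
    def __init__(self):
        self._kv = {}

    def open_session(self) -> str:
        sid = uuid.uuid4().hex
        self._kv[sid] = []                       # list of (question, answer) pairs
        return sid

    def context(self, sid: str) -> str:
        return "".join(f"Q: {q}\nA: {a}\n" for q, a in self._kv[sid])

    def append(self, sid: str, q: str, a: str) -> None:
        self._kv[sid].append((q, a))

    def close_session(self, sid: str):
        return self._kv.pop(sid)                 # hand off to long-term storage

def answer(store: SessionStore, sid: str, question: str, query_llm) -> str:
    """Only this session's history is embedded in the prompt, so one user's
    context never leaks into another user's request."""
    prompt = store.context(sid) + f"Q: {question}\nA:"
    reply = query_llm(prompt)
    store.append(sid, question, reply)
    return reply
```

Closing a session returns its working memory so the agent can move it to long-term storage, matching the lifecycle described above.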
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-B Sandbox",
|
| 69 |
+
"text": "###figure_8### A sandbox restricts the capabilities of the agent program.\nIt enforces the limitation on the program\u2019s access to both local and remote resources as shown in Figure 8 ###reference_###.\nThis section describes the application of classic access control provided by sandboxes on agent programs."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2.1",
|
| 73 |
+
"parent_section_id": "4.2",
|
| 74 |
+
"section_name": "IV-B1 Access to local resources",
|
| 75 |
+
"text": "The sandbox restricts the agent\u2019s consumption of local resources such as CPU, memory, and storage.\nIt also limits the agent\u2019s access to a sub-file system.\nTogether with session management,\nit further isolates the sub-file systems between sessions.\nTo demonstrate the necessity of this approach, we designed BashAgent to interact with the operating system with bash as its tool,\nwhich uses gpt-3.5-turbo to understand user instructions and generate actions.\nBashAgent has two variants BashAgentf granted with full accessibility\nand BashAgentc constrained in a docker container.\nBased on AgentBench [liu2024agentbench],\nwe collect and design 95 tasks related to system security to check the harmfulness of unconstrained AI agents.\nWe categorize the tasks into confidentiality, integrity, and availability,\nand check if the LLM would accept the prompts with malicious intent and generate the attacking actions.\nWe show the results of running BashAgentf in Table I ###reference_###.\nWe found that BashAgentf accepts the majority of malicious intents and generates the attacking instructions,\nand generated attacking commands could be executed successfully in an unprotected environment,\nmaking the host system extremely vulnerable in all three security aspects.\nHowever, once we apply appropriate sandbox configurations,\nBashAgentc successfully defended against all the LLM-generated attacks.\nThe LLM gpt-3.5-turbo was aligned with human values [ouyang2022training]\nbut still struggles to reject malicious intent in the AI agent use case.\nTherefore, alignment training will not be enough to secure AI agents,\nand adding limitations on access to local resources is necessary for complete security."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.2.2",
|
| 79 |
+
"parent_section_id": "4.2",
|
| 80 |
+
"section_name": "IV-B2 Access to remote resources",
|
| 81 |
+
"text": "Sandbox environment implements controlled access through mechanisms like whitelists, blacklists, and rate limiting\nin addition to fundamental interaction isolation.\nThis framework allows resource providers to control the extent of access granted to agent programs selectively,\nranging from full permission to complete prohibition or limitations to specific subsets of resources.\nConsequently, our method enhances security by effectively mitigating unwanted access from AI agents and potential threats posed by adversarial inputs\nto the agent."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.3",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-C Protecting Models for AI Agents",
|
| 87 |
+
"text": "AI agents must prevent the flow of private or malicious information between users.\nLeaked private information compromises the user\u2019s privacy, while malicious information causes the model to output wrong, objectional, or otherwise malicious responses."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.3.1",
|
| 91 |
+
"parent_section_id": "4.3",
|
| 92 |
+
"section_name": "IV-C1 Sessionless models for AI agents",
|
| 93 |
+
"text": "If the AI agent has no notion of sessions, then the agent must not fine-tune its LLM on private data\nor it must filter out private or malicious data from the query to the model.\nThe first step is to identify this data. By employing meticulous prompt engineering,\ndevelopers can enable the AI agent to interactively request sensitive data in a step-by-step manner,\nleaving markers on the data for further processing.\nThe next step is to whitewash them into non-sensitive data.\nFor example, by replacing US social security numbers (SSN) with nine random digits.\nThis leaks no information about the specific SSN but still allows the model to learn from the context around the SSN.\nAI agent applications require this harmless version of data to be manipulable.\nFor example, processing the last four digits of the credit card number as in web shopping [yao2022webshop].\nIn this case, the encryption transformation needs to be structure-preserving and information-preserving to text slicing.\nOne solution for this is format-preserving encryption [bellare2010ffx].\nA Format-Preserving Encryption for Text Slicing is an encryption scheme such that for all possible private messages and its indices ,\n.\nFPETS allows language models to read and manipulate private data as ciphertext instead of plaintext, therefore preventing privacy leaks.\nHowever, whether encrypting data in the input prompt harms the usability of the AI agent or not is unknown.\nTo verify this defense method,\nwe design an evaluation framework that prompts the LLM to operate on encrypted data.\nEach task in our evaluation framework is a roundtrip,\nwhere each AI agent is given a pair of encryption and decryption functions.\nWhen given a natural language prompt, the AI agents will first encrypt the data,\nand then pass the ciphertext to their LLM for manipulations such as text slicing.\nWe then ask the agent to return the slice of information we want.\nThe agent responds with the decrypted output for us to validate against the original slice of plaintext.\nWe measure the success rate of this evaluation by \nwhere is the total number of tasks and is the number of tasks where the agent completed a round trip with no error.\nAs a proof of concept, we first tested encoded strings before encrypted strings.\nWe generate random strings that include digits and both upper case and lower case letters,\nand encode them with a simple substitution cipher denoted by ,\nwhich extends the \u201crotate-by-13\u201d cipher to operate on the character set mentioned above.\nSince \u2019s substitution on the characters is one-to-one,\n is FPETS.\nLet denote the decryption scheme corresponding to .\nFor confidential data , this evaluation process can be formulated as\n.\nFor comparison, we also report the success rate of the agent performing the same tasks with the plaintext in Table II ###reference_###.\nWe observed that the success rate for slicing ciphertexts was similar to the success rate for slicing plaintext.\nDespite an unimpressive success rate on both plaintext and ciphertext,\nthe results showed that both GPT models were able to understand and respond to queries involving the manipulation of encoded strings.\nExperimentation on the original strings yielded similar success rates,\nshowing that encryption was not the cause of the low success rate.\nThis means that encrypted data in the prompt have little effects on the semantics of the query,\nshowing that FPETS as a defense technique does not affect the usability of AI agents 
significantly.\nText slicing is not the only task that an AI agent needs to complete on sensitive data.\nAnother frequent use-case of AI agents is to perform calculations on sensitive data,\nwhich is common in financial and medical domains [naehrig2011can].\nTo this end, homomorphic encryption, which allows binary operations on encrypted data,\nis essential for AI agents to perform calculations on the data.\nLet be a binary operator.\nA homomorphic encryption scheme is a map from set of messages to \nsuch that for all , .\n is considered a fully homomorphic encryption scheme if\nit allows arbitrary function to be applied to the data an unlimited number of times [acar2018survey].\n###figure_9### We introduce the application of FHE to the AI agent workflow in Figure 9 ###reference_###.\nFHE serves as a defense for user data confidentiality when the agent is required to perform mathematical operations on sensitive data.\nWe expand our evaluation to incorporate FHE and its intrinsic property of allowing operations to be performed on ciphertext(s) without decryption.\nFollowing a similar design for FPETS evaluation,\nwe provided the agent with an array of the ciphertexts of numbers encrypted by a FHE scheme \nand tools to perform addition and multiplication on the ciphertexts.\nThe decryption of the calculation result was again done by the agent outside of the LLM.\nWe prompt the agent with queries asking for the sum or product of numbers at specified indices of the ciphertext array and use the same success rate metric for this evaluation.\nResults in this case were verified by checking the agent\u2019s response against the original numbers\u2019 binary operation result (sum or product).\nLet denote the decryption scheme corresponding to .\nFor confidential data and binary operator ,\na task can be formulated as\n.\nWe report the evaluation results for FHE agents in Table II ###reference_###.\nOur evaluation results on addition and multiplication suggest that\nthis defense is effective for AI agents requiring calculations on sensitive data supported by these operations.\nThus, FHE is a solution for maintaining privacy during operations on sensitive data.\nOverall, our encryption defense does not substantially compromise the usability of AI agents\nand highlights a potential direction for future research on privacy-preserving AI agents."
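The FPETS evaluation above uses a substitution cipher that extends rotate-by-13 to digits and both letter cases, and it relies on slicing commuting with decryption. The sketch below illustrates that setup; the character ordering and rotation amount are our assumptions where the text does not spell them out, and the agent roundtrip (encrypt, let the LLM slice the ciphertext, decrypt the returned slice) is shown without the LLM call. A corresponding FHE sketch would need a homomorphic-encryption library and is omitted here.

```python
import string

ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase  # 62 symbols
ROT = 31   # assumption: roughly half the alphabet, analogous to rot-13

def encrypt(text: str) -> str:
    """Character-wise substitution: format- and length-preserving."""
    return "".join(ALPHABET[(ALPHABET.index(c) + ROT) % len(ALPHABET)] for c in text)

def decrypt(text: str) -> str:
    return "".join(ALPHABET[(ALPHABET.index(c) - ROT) % len(ALPHABET)] for c in text)

# Slicing commutes with decryption, so the agent can ask its LLM to slice
# ciphertext and then decrypt only the returned slice, never exposing the
# full plaintext to the model.
ssn = "123456789"
ct = encrypt(ssn)
i, j = 5, 9                        # e.g. "return the last four digits"
assert decrypt(ct[i:j]) == ssn[i:j]
print(decrypt(ct[i:j]))            # 6789
```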
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.3.2",
|
| 97 |
+
"parent_section_id": "4.3",
|
| 98 |
+
"section_name": "IV-C2 Session-aware models for AI agents",
|
| 99 |
+
"text": "###figure_10### An alternative to sessionless defenses is to make session-aware AI models.\nTowards this direction, OpenAI recently introduced Temporary Chat\n222https://help.openai.com/en/articles/8914046-temporary-chat-faq ###reference_6-temporary-chat-faq###,\nwhere they promised not to use the chat history to improve their models.\nHowever, not improving the model on agent tasks would limit\nagent intelligence and user experience.\nTo build powerful agent programs to handle diverse tasks,\nlearning actions are essential.\nOne approach to privacy-preserving AI agents with personalization is fine-tuning each user\u2019s LLM on their own chat history,\nisolating model updates per user as shown in Figure 10 ###reference_###.\nHowever, this is costly and limited by available data.\nAlternatives like in-context learning [brown2020language] and retrieval-augmented generation [lewis2020retrieval] enhance responses by embedding past contexts in prompts,\nbut are constrained by the length of model\u2019s context window.\nA more promising method is prompt tuning [lester2021power], which freezes the foundational model and adds a few user-specific learnable parameters \nonly to remember chat history.\nThis technique avoids sharing data with the foundation model provider,\ndirectly addressing privacy concerns."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Related work",
|
| 105 |
+
"text": "Recent advancements in LLMs have had a significant impact in the development of AI agent,\nparticularly in their ability to reason based on natural language prompts\nto observe and interact with their environments dynamically [wei2022chain, yao2024tree].\nThis shift from reinforcement learning\nto LLM agents has ushered in a new wave of AI agent development,\nwhere the emphasis is on enabling agents to perform actions based on natural language commands.\nReAct [yao2023react] introduced chain-of-thought prompting [wei2022chain] to guide pre-trained LLMs to follow instructions\nin the agent setting.\nThis approach has since been applied to computer tasks [kim2024language]\nand other real-world tasks [yao2022webshop, wang2023describe, gu2023dont, park2023generative].\nTo evaluate the performance of the agents, several benchmarks [zhou2024webarena, liu2024agentbench] have been proposed.\nThese benchmarks measure the correctness of an agent\u2019s actions without\nconsidering the potential vulnerabilities that agent actions can cause to the environment.\nThe threats to LLMs and AI agents are different [deng2024aiagentsthreatsurvey].\nFor LLMs, the concerns primarily address model alignment with human values, including ethics, offensive language, and politics [yu2024llm].\nConversely, AI agents, which use LLMs to generate actions and access tools,\npose threats to real computing systems, applications, and resources, compromising their confidentiality, integrity, and availability."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6",
|
| 109 |
+
"parent_section_id": null,
|
| 110 |
+
"section_name": "VI Conclusion",
|
| 111 |
+
"text": "With the aid of tool-augmented LLMs,\nAI agents are being recognized as a promising direction toward artificial assistants.\nConsiderable research has focused on enhancing the accuracy of AI agent actions\nthrough advanced reasoning, planning, and learning.\nHowever, despite high performance in controlled evaluation settings,\nthe potential side effects and dangers posed by these methods have not been thoroughly examined.\nIn this paper, we present a systematic analysis of the security issues in current AI agent development\nand propose practical and feasible defense strategies.\nWe discuss the potential vulnerabilities of AI agents both theoretically and in realistic scenarios with security-centric examples,\nand propose multiple defense techniques for each identified vulnerability.\nWe highlight the future research directions and best practices for developing secure agent programs,\nand believe our work could boost the advancement of secure and trustworthy AI agents.\nOur code and data are publicly available 333https://github.com/SecurityLab-UCD/ai-agent-security ###reference_t-security###."
|
| 112 |
+
}
|
| 113 |
+
],
|
| 114 |
+
"appendix": [
|
| 115 |
+
{
|
| 116 |
+
"section_id": "Appendix 1",
|
| 117 |
+
"parent_section_id": null,
|
| 118 |
+
"section_name": "Appendix A Appendix",
|
| 119 |
+
"text": ""
|
| 120 |
+
}
|
| 121 |
+
],
|
| 122 |
+
"tables": {
|
| 123 |
+
"1": {
|
| 124 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.3.2\" style=\"font-size:90%;\">Unconstrained AI agents will execute dangerous actions generated by the LLM.\n#Task is the number of tasks we gathered in this category.\n#Gen is the number of tasks accepted by the LLM and generates attacking actions.\n#Exec is the number of LLM-generated commends that are executed successfully and compromise the vulnerabilities.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T1.4.1.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.4.1.1.2\"><span class=\"ltx_text\" id=\"S4.T1.4.1.1.2.1\" style=\"font-size:80%;\">#Task</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.4.1.1.3\"><span class=\"ltx_text\" id=\"S4.T1.4.1.1.3.1\" style=\"font-size:80%;\">#Gen</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.4.1.1.4\"><span class=\"ltx_text\" id=\"S4.T1.4.1.1.4.1\" style=\"font-size:80%;\">#Exec</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.4.1.1.5\"><span class=\"ltx_text\" id=\"S4.T1.4.1.1.5.1\" style=\"font-size:80%;\">Attacked</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.4.2.2.1\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.1.1\" style=\"font-size:80%;\">Confidentiality</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.2.2\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.2.1\" style=\"font-size:80%;\">25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.2.3\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.3.1\" style=\"font-size:80%;\">25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.2.4\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.4.1\" style=\"font-size:80%;\">24</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.4.2.2.5\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.5.1\" style=\"font-size:80%;\">96.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.4.3.3.1\"><span class=\"ltx_text\" id=\"S4.T1.4.3.3.1.1\" style=\"font-size:80%;\">Integrity</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.3.2\"><span class=\"ltx_text\" id=\"S4.T1.4.3.3.2.1\" style=\"font-size:80%;\">35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.3.3\"><span class=\"ltx_text\" id=\"S4.T1.4.3.3.3.1\" style=\"font-size:80%;\">35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.3.3.4\"><span class=\"ltx_text\" id=\"S4.T1.4.3.3.4.1\" style=\"font-size:80%;\">30</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.4.3.3.5\"><span class=\"ltx_text\" id=\"S4.T1.4.3.3.5.1\" style=\"font-size:80%;\">85.7%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.4.4.4.1\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.1.1\" style=\"font-size:80%;\">Availability</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.4.4.4.2\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.2.1\" style=\"font-size:80%;\">35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.3\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.3.1\" style=\"font-size:80%;\">30</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.4.4\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.4.1\" style=\"font-size:80%;\">22</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.4.4.4.5\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.5.1\" style=\"font-size:80%;\">62.9%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.5.1\"><span class=\"ltx_text\" id=\"S4.T1.4.5.5.1.1\" style=\"font-size:80%;\">Total</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.5.2\"><span class=\"ltx_text\" id=\"S4.T1.4.5.5.2.1\" style=\"font-size:80%;\">95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.5.3\"><span class=\"ltx_text\" id=\"S4.T1.4.5.5.3.1\" style=\"font-size:80%;\">90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.5.4\"><span class=\"ltx_text\" id=\"S4.T1.4.5.5.4.1\" style=\"font-size:80%;\">76</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.5.5\"><span class=\"ltx_text\" id=\"S4.T1.4.5.5.5.1\" style=\"font-size:80%;\">80.0%</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 125 |
+
"capture": "TABLE I: Unconstrained AI agents will execute dangerous actions generated by the LLM.\n#Task is the number of tasks we gathered in this category.\n#Gen is the number of tasks accepted by the LLM and generates attacking actions.\n#Exec is the number of LLM-generated commends that are executed successfully and compromise the vulnerabilities."
|
| 126 |
+
},
|
| 127 |
+
"2": {
|
| 128 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1\" style=\"font-size:90%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S4.T2.3.2\" style=\"font-size:90%;\">Results for AI agent with encrypted data.\nEach agent is evaluated on 100 randomly-generated tasks.\n\u201cSuccCiph\u201d is the success rate of agent completing the tasks with encrypted data.\n\u201cSuccPlain\u201d is the success rate of the agent completing the same tasks without encryption.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T2.4.1.1.1\"><span class=\"ltx_text\" id=\"S4.T2.4.1.1.1.1\" style=\"font-size:80%;\">Agent</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T2.4.1.1.2\"><span class=\"ltx_text\" id=\"S4.T2.4.1.1.2.1\" style=\"font-size:80%;\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.4.1.1.3\"><span class=\"ltx_text\" id=\"S4.T2.4.1.1.3.1\" style=\"font-size:80%;\">SuccCiph</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.4.1.1.4\"><span class=\"ltx_text\" id=\"S4.T2.4.1.1.4.1\" style=\"font-size:80%;\">SuccPlain</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.2.1.1\"><span class=\"ltx_text\" id=\"S4.T2.4.2.1.1.1\" style=\"font-size:80%;\">FPETS</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.2.1.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T2.4.2.1.2.1\" style=\"font-size:80%;\">gpt-3.5-turbo</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.2.1.3\"><span class=\"ltx_text\" id=\"S4.T2.4.2.1.3.1\" style=\"font-size:80%;\">49.0%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.2.1.4\"><span class=\"ltx_text\" id=\"S4.T2.4.2.1.4.1\" style=\"font-size:80%;\">47.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.4.3.2.1\"><span class=\"ltx_text\" id=\"S4.T2.4.3.2.1.1\" style=\"font-size:80%;\">FPETS</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.4.3.2.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T2.4.3.2.2.1\" style=\"font-size:80%;\">gpt-4-turbo</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.3.2.3\"><span class=\"ltx_text\" id=\"S4.T2.4.3.2.3.1\" style=\"font-size:80%;\">55.0%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.3.2.4\"><span class=\"ltx_text\" id=\"S4.T2.4.3.2.4.1\" style=\"font-size:80%;\">57.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.4.3.1\"><span class=\"ltx_text\" id=\"S4.T2.4.4.3.1.1\" style=\"font-size:80%;\">FHE</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.4.4.3.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T2.4.4.3.2.1\" style=\"font-size:80%;\">gpt-3.5-turbo</span></th>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S4.T2.4.4.3.3\"><span class=\"ltx_text\" id=\"S4.T2.4.4.3.3.1\" style=\"font-size:80%;\">85.0%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.3.4\"><span class=\"ltx_text\" id=\"S4.T2.4.4.3.4.1\" style=\"font-size:80%;\">99.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T2.4.5.4.1\"><span class=\"ltx_text\" id=\"S4.T2.4.5.4.1.1\" style=\"font-size:80%;\">FHE</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T2.4.5.4.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T2.4.5.4.2.1\" style=\"font-size:80%;\">gpt-4-turbo</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.5.4.3\"><span class=\"ltx_text\" id=\"S4.T2.4.5.4.3.1\" style=\"font-size:80%;\">89.0%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.4.5.4.4\"><span class=\"ltx_text\" id=\"S4.T2.4.5.4.4.1\" style=\"font-size:80%;\">94.0%</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 129 |
+
"capture": "TABLE II: Results for AI agent with encrypted data.\nEach agent is evaluated on 100 randomly-generated tasks.\n\u201cSuccCiph\u201d is the success rate of agent completing the tasks with encrypted data.\n\u201cSuccPlain\u201d is the success rate of the agent completing the same tasks without encryption."
|
| 130 |
+
},
|
| 131 |
+
"3": {
|
| 132 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T3.2.1.1\" style=\"font-size:90%;\">TABLE III</span>: </span><span class=\"ltx_text\" id=\"A1.T3.3.2\" style=\"font-size:90%;\">Results for AI agent with encrypted SSN.\nEach agent is evaluated on 100 randomly-generated tasks.\n\u201cSuccCiph\u201d is the success rate of agent completing the tasks with encrypted data.\n\u201cSuccPlain\u201d is the success rate of the agent completing the same tasks without encrypting the data.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T3.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T3.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.T3.4.1.1.1\"><span class=\"ltx_text\" id=\"A1.T3.4.1.1.1.1\" style=\"font-size:80%;\">Agent</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.T3.4.1.1.2\"><span class=\"ltx_text\" id=\"A1.T3.4.1.1.2.1\" style=\"font-size:80%;\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.4.1.1.3\"><span class=\"ltx_text\" id=\"A1.T3.4.1.1.3.1\" style=\"font-size:80%;\">SuccCiph</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T3.4.1.1.4\"><span class=\"ltx_text\" id=\"A1.T3.4.1.1.4.1\" style=\"font-size:80%;\">SuccPlain</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T3.4.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T3.4.2.1.1\"><span class=\"ltx_text\" id=\"A1.T3.4.2.1.1.1\" style=\"font-size:80%;\">SSN</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T3.4.2.1.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T3.4.2.1.2.1\" style=\"font-size:80%;\">gpt-3.5-turbo</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.4.2.1.3\"><span class=\"ltx_text\" id=\"A1.T3.4.2.1.3.1\" style=\"font-size:80%;\">38.0%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T3.4.2.1.4\"><span class=\"ltx_text\" id=\"A1.T3.4.2.1.4.1\" style=\"font-size:80%;\">40.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.4.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T3.4.3.2.1\"><span class=\"ltx_text\" id=\"A1.T3.4.3.2.1.1\" style=\"font-size:80%;\">SSN</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T3.4.3.2.2\"><span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T3.4.3.2.2.1\" style=\"font-size:80%;\">gpt-4-turbo</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.4.3.2.3\"><span class=\"ltx_text\" id=\"A1.T3.4.3.2.3.1\" style=\"font-size:80%;\">38.0%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T3.4.3.2.4\"><span class=\"ltx_text\" id=\"A1.T3.4.3.2.4.1\" style=\"font-size:80%;\">40.0%</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 133 |
+
"capture": "TABLE III: Results for AI agent with encrypted SSN.\nEach agent is evaluated on 100 randomly-generated tasks.\n\u201cSuccCiph\u201d is the success rate of agent completing the tasks with encrypted data.\n\u201cSuccPlain\u201d is the success rate of the agent completing the same tasks without encrypting the data."
|
| 134 |
+
}
|
| 135 |
+
},
|
| 136 |
+
"image_paths": {
|
| 137 |
+
"1": {
|
| 138 |
+
"figure_path": "2406.08689v3_figure_1.png",
|
| 139 |
+
"caption": "Figure 1: Overview of LLM-based AI agent.",
|
| 140 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 141 |
+
},
|
| 142 |
+
"2": {
|
| 143 |
+
"figure_path": "2406.08689v3_figure_2.png",
|
| 144 |
+
"caption": "Figure 2: AI agent\u2019s potential vulnerability to model pollution.",
|
| 145 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 146 |
+
},
|
| 147 |
+
"3": {
|
| 148 |
+
"figure_path": "2406.08689v3_figure_3.png",
|
| 149 |
+
"caption": "Figure 3: AI agents cause privacy leakages.",
|
| 150 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 151 |
+
},
|
| 152 |
+
"4": {
|
| 153 |
+
"figure_path": "2406.08689v3_figure_4.png",
|
| 154 |
+
"caption": "Figure 4: An illustration of vulnerabilities of zero-shot action agents.\nIn the figures, we use the term \u201cWorld\u201d to denote the host OS of the agent and external API resources.",
|
| 155 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 156 |
+
},
|
| 157 |
+
"5": {
|
| 158 |
+
"figure_path": "2406.08689v3_figure_5.png",
|
| 159 |
+
"caption": "Figure 5: An illustration of AI agent\u2019s effectful planning.\nIn this case, even the users are interacting with the agent program in a non-harmful way,\nthey might still cause security issues unintentionally.\nOne thing to note is that agents are still vulnerable to attacks as in Figure 4.",
|
| 160 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 161 |
+
},
|
| 162 |
+
"6": {
|
| 163 |
+
"figure_path": "2406.08689v3_figure_6.png",
|
| 164 |
+
"caption": "Figure 6: Session management for stateful LLM-based AI agent.\nWe use numbers with gray boxes to denote session ID.\n\u201cKVDB\u201d is the abbreviation for key-value database.",
|
| 165 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 166 |
+
},
|
| 167 |
+
"7": {
|
| 168 |
+
"figure_path": "2406.08689v3_figure_7.png",
|
| 169 |
+
"caption": "Figure 7: Composable state transformer framework for LLM and AI agent.",
|
| 170 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 171 |
+
},
|
| 172 |
+
"8": {
|
| 173 |
+
"figure_path": "2406.08689v3_figure_8.png",
|
| 174 |
+
"caption": "Figure 8: When the attacker gives the AI agent malicious intents and the LLM generates dangerous actions,\nsandbox could limit the effects of these actions to a small and controlled portion of the system.\nWith such limitation, the attack on the system via an AI agent can be prevented and the negative impacts can be minimized.",
|
| 175 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 176 |
+
},
|
| 177 |
+
"9": {
|
| 178 |
+
"figure_path": "2406.08689v3_figure_9.png",
|
| 179 |
+
"caption": "Figure 9: Sessionless AI agents with encryption.\nTools in this case need to be support a encryption scheme,\nlike slicing for FPETS and addition or multiplication for FHE.",
|
| 180 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 181 |
+
},
|
| 182 |
+
"10": {
|
| 183 |
+
"figure_path": "2406.08689v3_figure_10.png",
|
| 184 |
+
"caption": "Figure 10: Session-aware AI agents with prompt tuning.\n\u03b8P\u2062isubscript\ud835\udf03\ud835\udc43\ud835\udc56\\theta_{Pi}italic_\u03b8 start_POSTSUBSCRIPT italic_P italic_i end_POSTSUBSCRIPT denotes the added trainable parameters only for the user\u2019s chat history.\nWith prompt tuning, AI agents can improve themselves by updating only \u03b8Psubscript\ud835\udf03\ud835\udc43\\theta_{P}italic_\u03b8 start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT,\nwithout compromising the foundational LLM or leaking private information.",
|
| 185 |
+
"url": "http://arxiv.org/html/2406.08689v3/"
|
| 186 |
+
}
|
| 187 |
+
},
|
| 188 |
+
"validation": true,
|
| 189 |
+
"references": [],
|
| 190 |
+
"url": "http://arxiv.org/html/2406.08689v3"
|
| 191 |
+
}
|
20241217/2406.10359v2.json
ADDED
|
@@ -0,0 +1,120 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Learning Nonlinear Reduced Order Models using State-Space Neural Networks with Ordered State Variance",
|
| 3 |
+
"abstract": "A novel State-Space Neural Network with Ordered variance (SSNNO) is presented in which the state variables are ordered in decreasing variance. A systematic way of model order reduction with SSNNO is proposed, which leads to a Reduced order SSNNO (R-SSNNO). Theoretical results for the existence of an SSNNO with arbitrary bounds on the output prediction error are presented. The application of SSNNO in control: Model Predictive Control (MPC) and state estimation: Extended Kalman Filter (EKF) is discussed.\nThe effectiveness of SSNNO in system identification and control is illustrated using simulations on a nonlinear continuous reactor process example.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Identifying dynamic models from data finds applications in control, estimation, process monitoring, economics, and ecology, among other areas [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. The initial works related to identifying input-output dynamic models for linear descriptions such as transfer functions [1 ###reference_b1###], autoregressive exogenous models [5 ###reference_b5###], as well as nonlinear descriptions such as radial basis functions [6 ###reference_b6###], kernel methods [7 ###reference_b7###], feedforward Neural Network (NN) and Recurrent NN (RNN) models [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###]. The training data for these approaches typically consists of available sensor measurements and known exogenous inputs. It is well-known that input-output models cannot characterize the internal dynamics of the system. On the other hand, state-space models (or input-state-output models) additionally describe the internal state of the system and are therefore widely used in modern control approaches, such as Linear Quadratic Regulator (LQR) [11 ###reference_b11###], Model Predictive Control (MPC) [12 ###reference_b12###], sliding mode control [13 ###reference_b13###]. This has led to various approaches for identifying state-space models directly from training data.\nIn the case of linear state-space models, identification methods can be classified into three types:\nPrediction Error Method (PEM)\n [14 ###reference_b14###]: in which the parameters of the state-space matrices are computed by minimizing a suitable function of the output prediction error.\nRealization theory-based approaches [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###]: where the training data is first used to obtain an input-output model such as an impulse response model, followed by a realization step that results in a state-space model (for example, Gilbert\u2019s method or the Ho-Kalman algorithm).\nSubspace projection methods [18 ###reference_b18###, 19 ###reference_b19###]: in which\nthe state-space model is directly estimated from the training data using orthogonal or oblique projections of the row spaces of Hankel matrices followed by a factorization step (for example, N4SID).\nLinear state-space models are related to the input-output signals or input-output data via Markov parameters, which assumes that the state dimension is known [17 ###reference_b17###]. An independent parameterization of matrices of the linear state-space model in the PEM method can result in over-parameterization. Moreover, the resulting optimization problem in PEM is nonconvex. Both of these issues, along with the non-availability of state dimension, manifest in poor PEM estimates of the linear state-space matrices. Subspace methods overcome the issue of nonconvex optimization partly by the use of linear projections. An estimate of the state dimension is made in realization theory as well as subspace methods by use of matrix factorization (primarily based on singular value decomposition). All three identification approaches for linear state-space models are mature with standard software available [1 ###reference_b1###]. 
Extensions to the identification of Linear Parameter Varying (LPV) state-space models are also available [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###].\nHowever, linear state-space models provide only a local approximation and are inadequate for applications that require a nonlinear description of system dynamics over a wide operating envelope. This has led to the development of various approaches for identifying nonlinear state-space models. These include polynomial state-space methods [23 ###reference_b23###], probabilistic nonlinear state-space identification [24 ###reference_b24###, 25 ###reference_b25###], Autoencoder (AE) based system identification [26 ###reference_b26###, 27 ###reference_b27###], physics informed NNs [28 ###reference_b28###, 29 ###reference_b29###], and State-Space Neural Network (SSNN) models [30 ###reference_b30###, 31 ###reference_b31###]. All of the above approaches for nonlinear system identification can be classified as nonlinear extensions of PEM. To the best of the authors\u2019 knowledge, nonlinear state-space identification methods, analogous to realization theory or subspace identification, have not been reported in the literature. Among PEM methods, the SSNN-based approaches, which approximate the state and output functions of the nonlinear state-space model with NNs, have recently gained popularity, at least partly due to the availability of software to train deep NNs. It is worth recalling that the motivation for NN-based approaches such as SSNN comes from the Universal Approximation Theorems (UATs), which prove the ability\nof NNs to universally approximate continuous nonlinear functions with arbitrary accuracy [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###].\nMoreover, one can also explore the stability of SSNN thereby making them amenable to systems analysis [36 ###reference_b36###, 37 ###reference_b37###].\nDespite these advantages, SSNNs are plagued with similar issues as encountered in PEM namely, convergence to local minima, unknown initial condition, and unknown state dimension, i.e., model order, leading to over-parameterization of SSNN model. The issue of convergence to local minima in SSNN has been addressed in [23 ###reference_b23###, 38 ###reference_b38###]. Estimation of the initial state in SSNN is presented in [39 ###reference_b39###, 40 ###reference_b40###].\nIn [37 ###reference_b37###, 38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###], the dimension of the state vector is assumed to be known a priori. However, the order of a real system is generally unknown and this fact imposes a serious limitation on the identification of SSNN models since selection of the model order by trial and error requires considerable training and testing.\nAlthough, model order estimation/reduction methods are proposed for AE models [27 ###reference_b27###] and LPV models [41 ###reference_b41###, 42 ###reference_b42###], their extension to SSNN is not explored in the literature.\nThus, there is a need for a systematic method that determines the model order for SSNN.\nThis motivates the proposed State-Space Neural Network with Ordered variance (SSNNO) approach in which we present a novel idea of simultaneously identifying the nonlinear state-space model along with an estimated state dimension and hence model order. 
The estimated/reduced model order is defined as the number of states that exhibit significant variance over the training data and whose value becomes apparent during the identification step, wherein a variance-ordering of all model states is enforced.\nTo the best of the authors\u2019 knowledge, this is the first work that incorporates variance-ordering of state variables in SSNN.\nThe proposed SSNNO is inspired by a previous work [43 ###reference_b43###] that uses the idea of variance-ordering of latent variables results in an AE with Ordered variance (AEO). The AEO identifies a nonlinear static model from data in an unsupervised setting, whereas the proposed SSNNO identifies a nonlinear dynamic model in a supervised setting.\nThe major contribution of the current work lies in proposing a systematic approach for determining the estimated model order with SSNNO. Further, a model order reduction step is incorporated to obtain a Reduced order SSNNO (R-SSNNO) model.\nThe rest of the paper is organized as follows. Section 2 reviews relevant concepts from SSNN. The model order determination in SSNN is discussed in Section 3. The proposed SSNNO approach is presented in Section 4. Section 5 presents determination of the reduced model order with SSNNO followed by steps to obtain an R-SSNNO model. This section also presents theoretical results on the existence of SSNNO and R-SSNNO.\nSection 6\nillustrates the numerical implementation of SSNNO for the identification of a nonlinear CSTR system and comparison of results with SSNN. Further, the section also discusses the application of the SSNNO in EKF-based state-feedback MPC for the CSTR example. Conclusions and future directions are discussed in Section 7.\nNotations:\nScalars are denoted by normal font (), matrices and vectors using the bold font (), and sets by blackboard bold font (). The set denotes the - dimensional Euclidean space, and the space of real matrices is denoted by . The sample mean vector and sample covariance matrix of the vector x over the data set are defined as and respectively. For a matrix the notations and denotes the row and column, respectively.\nThe Euclidean norm of vector is denoted by \nand denotes the Frobenious norm of the matrix where and\n\nFinally, represents the identity matrix of size and 0 denotes the zero matrix of appropriate dimension.\nAbbreviations used in the paper are listed in Appendix A.4 ###reference_###."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Preliminaries",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "State-Space Neural Network (SSNN)",
|
| 21 |
+
"text": "Consider the discrete-time nonlinear system:\nwhere , , are state, control input, and output vectors respectively, denote compact sets,\n is the state function, and is the output function, without direct transmission of the input.\nEq. (1 ###reference_###) serves as the data-generating function in order to obtain the training data consisting of\ninput and output sequences of size :\nwhere and are the input and output samples at time instant.\nNote that the true but unknown order of the system corresponds to the dimension of the system state vector .\n###figure_1### SSNNs, introduced in [30 ###reference_b30###], are trained using dynamic input-output data as (possibly deep) NN, with state and observation functions represented as subnetworks:\nwhere is the predicted state vector with a user-specified dimension , is the predicted output,\n are the state and output subnetworks of SSNN, and contain the corresponding weight parameters.\nThe block diagram for SSNN is shown in Fig. 1 ###reference_### in which state and output functions, , are represent as subnetworks. Note that the subnetwork representing involves a feedback connection as shown in Fig. 1 ###reference_### thereby becoming an RNN, which is denoted as \nThe delay block stores and shifts the predicted state sample by one instant. Define the predicted state and output sequence obtained using SSNN as [44 ###reference_b44###]:\nThe prediction performance of SSNN is characterized by the Squared Prediction Error (SPE):\nwhich is also chosen as the loss function of SSNN. Define a matrix containing the weights of and as:\nThe SSNN training problem then becomes:\nFor the SSNN in Eq. (3 ###reference_###) use training data Eq. (2 ###reference_###) to find estimates for: (i) initial state , and parameters A in Eq. (6 ###reference_###) by solving:\nThe optimization problem for SSNN can be solved using backpropagation through time [8 ###reference_b8###, 44 ###reference_b44###] and truncated backpropagation [45 ###reference_b45###] algorithms.\nSSNNs can learn nonlinear higher-order systems by choosing sufficient number of nodes and hidden layers in the NN. As per UATs [32 ###reference_b32###, 33 ###reference_b33###], any continuous nonlinear function can be approximated with arbitrary accuracy by an NN with one hidden layer consisting of sufficient number of nodes. Similar results are also introduced for deep NNs with a bounded number of nodes per layer [34 ###reference_b34###, 35 ###reference_b35###].\nThis supports the approximation capabilities of NNs, and, thereby, of SSNNs. The available theoretical results on the accuracy of the SSNN approximation of system Eq. (1 ###reference_###) are summarized below.\nThe state function and output function in Eq. (1 ###reference_###) are uniformly Lipschitz in x with and as the Lipschitz constants satisfying:\nfor all and .\nThe true order of the system in Eq. (1 ###reference_###) is known and the SSNN order is chosen identical to the system order: .\nConsider the training data sequence in Eq. (2 ###reference_###), generated by applying bounded inputs U to the nonlinear system Eq. (1 ###reference_###), which satisfies Assumption 1 ###reference_umption1###. Then for any , there exists a trained SSNN Eq. (3 ###reference_###) under Assumption 2 ###reference_umption2###, with an initial condition , such that the SPE in Eq. (5 ###reference_###) is bounded:\nSee Appendix A.1 ###reference_###.\n\u220e\nIn practice, the system order and the initial state are unknown. 
Consequently, the main challenge in using an SSNN lies in determining the state dimension suitable for learning the given training data set. This is mostly done by trial and error and requires considerable training and testing. The main focus of this paper is the problem of estimating model order in SSNN using training data which will be discussed next."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Determination of Model Order in SSNN",
|
| 27 |
+
"text": "Model order determination of SSNNs is an important problem as it directly affects the complexity of the controller (for e.g., the optimization problem size in MPC) and estimator (for e.g., the size of the Riccati matrices in EKF). Therefore it is desirable that the estimated model order, denoted by , be as small as possible without compromising the prediction accuracy. This leads to the reduced model order determination problem for an SSNN as defined below:\nFor the SSNN in Eq. (3 ###reference_###), use training data Eq. (2 ###reference_###) to find estimates for: (i) a reduced model order , (ii) initial state , and (iii) parameters A in Eq. (6 ###reference_###) that minimize the SPE in Eq. (5 ###reference_###).\nDetermination of a reduced model order in Problem 2 ###reference_blem2### for a given training data is a non-trivial task.\nOne approach consists of starting with the user-specified SSNN order, that is, , followed by gradually decreasing and re-training of SSNN until the SPE continues to be in an acceptable range. However, this trial-and-error approach requires re-training SSNN numerous times.\nAnother approach to solving Problem 2 ###reference_blem2### is by initializing an SSNN with the state vector dimension set equal to the user-defined order . In addition to minimizing the SPE, SSNNs can be trained to minimize the number of state variables which exhibit significant variation over the training data. Those states of the trained SSNN that do not vary significantly over the training data can be considered as redundant, thereby enabling determination of a reduced model order . Let the sample variance of the state of SSNN be defined by (see notation). An estimate of the reduced model order for SSNN is now defined below:\nFor the SSNN in Eq. (3 ###reference_###) trained using data Eq. (2 ###reference_###), and which results in state sample variances , a reduced model order of the SSNN is defined as the number of states variables whose sample variances exceed a threshold :\nA procedure for solving Problem 2 ###reference_blem2### can be obtained by solving the following:\nFor the SSNN in Eq. (3 ###reference_###), use training data Eq. (2 ###reference_###) to find estimates for: (i) a reduced model order , (ii) initial state , and (iii) parameters A in Eq. (6 ###reference_###), by solving:\nThe minimizer of this optimization problem yields a reduced model order , which is also being minimized in the objective. However, Problem 3 ###reference_blem3### is a mixed integer nonlinear programming problem due to the integer decision variables and which is difficult to solve. In the proposed SSNNO, Problem 3 ###reference_blem3### is simplified by enforcing variance-ordering of the state variables to identify the reduced model order as discussed next."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "State-Space Neural Network with Ordered variance (SSNNO)",
|
| 33 |
+
"text": "The proposed SSNNO solves Problem 2 ###reference_blem2### using state variance-based regularization.\nSSNNO takes the same form as SSNN (see Fig. 1 ###reference_###) with the main difference being in the training of SSNNO, which identifies parameters of the network along with a reduced model order, simultaneously.\nThe state and output equations of SSNNO are represented as:\nwhere is the state vector predicted using SSNNO, is the predicted output,\n are the state and output functions, each modeled as a (possibly deep) NN, with representing the corresponding weights and biases.\nThe state and output functions are represented with subnetworks evaluated as a composition of layer activation functions:\nwhere and are the number of layers in state and output subnetworks, respectively. Here are layers of subnetwork represented as:\nwhere\n contains the activation functions for the nodes in the layer with corresponding weight and bias parameters being and , respectively. The input to the layer is . Input to the first layer . Output of the layer of yield SSNNO states: . Similarly, are layers of subnetwork :\nwhere contains the activation functions for nodes in the layer, is the weight matrix, is the bias,\n with input to first layer of subnetwork being and output, .\nLet the predicted state and output sequences using SSNNO, namely and be:\nThe SPE for SSNNO can be written as:\nDenote the sample mean vector and sample covariance matrix for the predicted state vector sequence in Eq. (16 ###reference_###) as and , respectively.\nThe diagonal elements of correspond to sample variances of the predicted state variables: We can now define SSNNO as following:\nAn SSNNO is a state-space model of the form as in Eq. (12 ###reference_###) for which the sample variances of the predicted state sequences computed using training data inputs in Eq. (16 ###reference_###) satisfy:\nOne way to achieve variance ordering is to incorporate a variance regularization term in the loss function as discussed next."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.1",
|
| 37 |
+
"parent_section_id": "4",
|
| 38 |
+
"section_name": "SSNNO Training: Variance-based State Regularization",
|
| 39 |
+
"text": "The training of the proposed SSNNO must ensure that the\nstates are ordered by sample variances: the first state exhibits the highest variance for the training data and subsequent states are arranged in the order of decreasing variances.\nInformation of the variance order can be exploited to determine the reduced model order necessary by retaining only of the states with dominant sample variances as identified in Eq. (10 ###reference_###) and reabsorbing the states, which exhibit insignificant variances, as part of the SSNNO model parameters.\nDefine which contains the weight parameters for as well as the initial state :\nThe loss function is constructed by seeking to minimize a weighted sample variance and parameter regularization terms, in addition to the SPE as follows:\nwhere are hyperparameters.\nWeighted SPE is related to SPE in Eq. (17 ###reference_###) as: represents the novel sample variance regularization term, which involves a diagonal weighting matrix whose non-negative elements are arranged in an increasing order:\nIt is obvious that the ordering of the above weight elements enforces a decreasing or-\nder in the sample variances of the states of the trained SSNNO by noting that,\nThe weight regularization term avoids overfitting.\nThus, the training problem for SSNNO can be stated as:\nGiven training data Eq. (2 ###reference_###), find parameters in Eq. (19 ###reference_###) for the SSNNO Eq. (12 ###reference_###) as follows:\nThe point of departure between SSNN and SSNNO is the inclusion of objective in Eq. (20 ###reference_###). The trade-off between with ensures that only those states that play a significant role in minimizing the SPE are allowed significant sample variances. Thus any overparameterization of the number of states owing to the user-specified order is identified by the criterion in Eq. (10 ###reference_###) for determination of the reduced model order , as discussed in Section 10 ###reference_###.\nElements of specify parameters of trained subnetworks and which, in turn, yield the state-space model with the user-specified order as in Eq. (12 ###reference_###).\nThe optimization problem for\nSSNNO in Eq. (23 ###reference_###) is typically nonconvex. Consequently,\nsimilar to\nother dynamic NN models, convergence of the weights to the global optimum is not guaranteed and the solution depends on the initial guess.\nNext, we discuss the determination of the reduced model order and identify a reduced-order model from the trained SSNNO."
|
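The training loss described in the section text above (a weighted squared prediction error, a state-variance regularization term with non-decreasing weights, and a parameter penalty) can be written out as a short numerical example. The following is a minimal NumPy sketch under stated assumptions, not the authors' implementation: the arrays `y_true`, `y_pred`, `x_pred`, the flattened parameter vector `theta`, and the hyperparameters `alpha`, `beta`, `gamma` are placeholders introduced only for illustration.

```python
import numpy as np

def ssnno_loss(y_true, y_pred, x_pred, theta, alpha=1.0, beta=1e-2, gamma=1e-4):
    """Illustrative SSNNO-style loss: weighted prediction error plus an
    ordered-variance penalty and a standard parameter penalty."""
    N, n_x = x_pred.shape

    # Squared prediction error over the training sequence.
    spe = np.sum((y_true - y_pred) ** 2) / N

    # Sample variance of each predicted state over the sequence.
    state_var = np.var(x_pred, axis=0)

    # Non-decreasing weights (w_1 <= ... <= w_n) penalize later states more,
    # pushing their variances toward zero and ordering the states by variance.
    w = np.linspace(0.0, 1.0, n_x)
    var_reg = np.dot(w, state_var)

    # Weight regularization to limit overfitting.
    param_reg = np.sum(theta ** 2)

    return alpha * spe + beta * var_reg + gamma * param_reg
```

Minimizing such a loss over the network parameters (for example with a quasi-Newton method, as in the simulation section) is what yields states whose sample variances decay with the state index.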
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "5",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Reduced order SSNNO (R-SSNNO)",
|
| 45 |
+
"text": "Assume that an adequately trained SSNNO as discussed in the previous section is available. Thus, the predicted state variables are ordered in terms of decreasing variance: , which is achieved by a suitable choice of W Eq. (21 ###reference_###) employed in the state variance term in Eq. (22 ###reference_###).\nTo find a reduced model order as in Definition 1 ###reference_inition1###, the number of state variables in the trained SSNNO that exhibit significant variance needs to be determined. This is achieved by categorizing the states of the SSNNO into significant and residual variables as follows:\nFor a given tolerance and a trained SSNNO as per Problem 4 ###reference_blem4###, the state variable is called a significant variable, if\notherwise, it is called a residual variable. Thus, the states of the trained SSNNO are ordered as,\nwhere correspond to the significant and residual variables, respectively. Moreover, the sample mean and sample variance for the significant variables are and the for the residual variables are .\nThe reduced model order corresponds to the\ndimension of significant variables, that is, and the value of becomes apparent only after obtaining a trained SSNNO.\nWe now present an approach for finding a Reduced-order SSNNO (R-SSNNO) model with order directly from the trained SSNNO, without requiring any retraining.\nThis is achieved by suitably partitioning the parameters of the first and last layers of subnetwork of the trained SSNNO namely, based on the reduced model order :\nwhere\n\n\n\n\n, , . Similarly, partition parameters of the first layer of subnetwork namely,\n as follows:\nwhere \nUsing this and Eq. (13 ###reference_###), the state and output equations in Eq. (12 ###reference_###) are rewritten as:\nSince the sample variance of residual variables is small, that is, , these states do not exhibit variation over the training data, and may be approximated by their mean . Thus,\nThe above approximation simplifies Eq. (28 ###reference_###) to yield the R-SSNNO model of order :\nwhere , corresponds to the activation functions of the first nodes in the layer,\n is the output predicted using the R-SSNNO model, and . Note that the R-SSNNO results in model order reduction relative to SSNN when . The value of the reduced model order can be tuned by adjusting the tolerance suitably.\nThe algorithm for identification of a R-SSNNO model is summarized below:\nIn linear systems, model order reduction is obtained with the dominant eigenvalues of the system matrix and dynamics of the system in the subspace spanned by the corresponding eigenvectors. Similarly, R-SSNNO achieves model order reduction by obtaining the dynamics in a reduced order manifold where the state variance plays the role of eigenvalues in linear model order reduction.\nThe next section presents properties of the models obtained with the SSNNO and R-SSNNO."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "5.1",
|
| 49 |
+
"parent_section_id": "5",
|
| 50 |
+
"section_name": "Properties of SSNNO and R-SSNNO",
|
| 51 |
+
"text": "Let denote the predicted output sequence of R-SSNNO Eq. (30 ###reference_###) with a reduced model order determined using the training data Eq. (2 ###reference_###). Then the output prediction error using the R-SSNNO is defined as:\nIf the variances of the residual state variables are zero, then the output predicted by the R-SSNNO model with order is identical to the output predicted by the SSNNO model with order implying that the model order reduction step does not result in degradation in SPE. This leads to the following Lemma,\nIf then the model ouputs of R-SSNNO and SSNNO are identical:\nSee Appendix A.2 ###reference_###.\n\u220e\nSimilar to the existence of SSNNs in Lemma 1 ###reference_ma1###, we proceed to prove existence of SSNNO.\nWe begin by noting that ordering would also be achieved in a trained SSNN by merely relabeling the state variables in terms of their variances, i.e., the one with the highest variance is named as the next one and so on. This can be achieved by rearranging the columns/rows of the weight and bias parameters for the state and output subnetworks after completion of training of the SSNN. This idea is used next to prove the existence of an SSNNO (and thereby R-SSNNO) with bounded prediction error which leads to the following Lemma,\nFor any nonlinear system Eq. (1 ###reference_###) that satisfies\nAssumptions LABEL:li and 2 ###reference_umption2###, and is subjected to bounded inputs,\nthere exists an SSNNO as in Definition 2 ###reference_inition2### with ordered variance of states, such that:\nfor all , , and\nSee Appendix A.3 ###reference_###.\n\u220e\nLemma 3 ###reference_ma3### guarantees the existence of an SSNNO where the variance ordering can be achieved in the state space without sacrificing the prediction accuracy. However, in the training of SSNNO in Problem 4 ###reference_blem4###, the prediction accuracy may be compromised, since a trade-off is sought between SPE and model order by adjusting the parameters and in Eq. (20 ###reference_###). The R-SSNNO model\ncan be used for designing data-driven state feedback-based control schemes such as MPC, LQR, adaptive control, etc. The next section presents a numerical implementation of the SSNNO on a CSTR example which includes the application of the SSNNO in MPC.\n###figure_2###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "6",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Simulation Results",
|
| 57 |
+
"text": "The proposed SSNNO is illustrated on a Continuous Stirred Tank Reactor (CSTR) system defined by the state equation in discrete time:\nwhere is the sampling period, is the discrete time instant, denotes continuous time, is the state function of CSTR in continuous time [50 ###reference_b50###]:\nwhere , are the reactant conversion and reactor temperature, is the reactor jacket temperature. Model parameters values are selected as \nThe temperature of the reactor is modeled as the output,\nwhere is the measurement noise, which is considered as Gaussian white noise with mean zero and standard deviation .\nIt is noted that the CSTR problem, with the true order , exhibits severe gain nonlinearity. The gain between the concentration and jacket temperature exhibits an 80-fold change between low and high reactor temperature conditions [6 ###reference_b6###]. The input-output data is generated by a forward simulation of Eqs. (34 ###reference_###)-(36 ###reference_###) over instants with the sampling period initial condition while the control input is chosen as a multi-level pseudo-random signal with amplitude in the range .\nThe first 500 samples of the dataset are used\nfor training the SSNNO and is denoted as while the remaining samples are used for\ntesting which is denoted as .\nThe SSNNO is trained using the training data where the loss function is chosen as in Eq. (20 ###reference_###) with and the activation function for the hidden layers are chosen as hyperbolic tangent (tanh) and the output layers as linear. The following parameter choices are made: , .\nTo demonstrate the effectiveness of SSNNO in determination of the reduced model order , despite of different user-specified orders , two cases with and are considered.\nThe weighting matrix Eq. (21 ###reference_###) is chosen as for and for \nThe unconstrained optimization problem in Eq. (23 ###reference_###) is solved for using the Quasi-Newton method [51 ###reference_b51###].\nFig. 2 ###reference_###a,b compare the system response with SSNNO with for the CSTR system for the training and testing inputs. It is interesting to note that the SSNNO is also able to capture the step response, indicating good performance for both transient and steady-state behaviours (see Fig. 2 ###reference_###c).\nTable 1 ###reference_### shows the variances of the state variables with SSNNO and SSNN (for which ) for . It should be noted that variances of the three states in SSNN are not ordered. Moreover, the variances are nonzero for all the three state variables. On the other hand, the variances of the SSNNO states are ordered, and the variance of the third state variable is deemed insignificant for , implying that the SSNNO is able to effectively determine an appropriate model order. Further, Table 1 ###reference_### shows the mean squared error between the given and predicted output for the training and testing data, denoted by and respectively.\nIn case of SSNNO with , similar comparisons are shown in Table 2 ###reference_###. It is noted that SSNNO achieves ordering of the states with the last two state variables being insignificant. This demonstrates that the reduced model order determination in SSNNO does not depend on the user-specified model order .\nFrom Tables 1 ###reference_### and 2 ###reference_###, it can be observed that the output prediction errors with SSNNO and SSNN are almost the same.\nThe extent of the model order reduction critically depends on the user-specified parameter . 
It is clear that a choice of would yield a SSNNO-R with only one state.\nThe state-space models as SSNNO for , and SSNNO-R with , and are reported below:\nThird-order model (SSNNO with ):\nSecond-order model (R-SSNNO with gives ):\nfor which the output prediction errors are obtained as and .\nFirst-order model (R-SSNNO with gives ):\nfor which the output prediction errors are obtained as\n and . Note that the output prediction errors obtained with the second-order and first-order models are almost same as the third-order model, indicating that a first order SSNNO-R model is a reasonable choice.\n###figure_3###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "6.1",
|
| 61 |
+
"parent_section_id": "6",
|
| 62 |
+
"section_name": "EKF based state space MPC design Example",
|
| 63 |
+
"text": "This section illustrates the use of the second-order SSNNO as the prediction model for MPC of the CSTR system. The cost function for the MPC is chosen as:\nwhere denotes the predicted state and control input at time instant but computed at time instant , and denotes the reference state and control inputs. The weighting matrices for state and control inputs are chosen as , , and the MPC horizon is chosen as Control constraints are imposed as with and \nSee [12 ###reference_b12###, 47 ###reference_b47###, 48 ###reference_b48###, 46 ###reference_b46###] for more details on MPC and data-driven control.\nThe output targets for the reactor temperature are chosen as for the first quadrant (first 25 instants), which is then reduced by 0.1 in each successive quadrant.\nThe state reference and control reference used in the MPC cost function Eq. (40 ###reference_###) are computed by solving the steady-state SSNNO-R model as follows:\nfor each of the reactor temperature targets. The MPC scheme uses states estimated using EKF as the current state from which the predicted state sequence is computed. The EKF uses the SSNNO-R model in Eq. (38 ###reference_###) to compute a filtered estimate of the current state which consists of two stages [49 ###reference_b49###]:\n1. Prediction:\n2. Update:\nwhere is the optimal Kalman gain, is the state covariance matrix,\nand are the covariance matrices for the disturbance and noise which are chosen as . The resultant scheme is denoted by SSNNO-EKF-MPC for which the closed-loop performance with the CSTR system is given in Fig. 3 ###reference_###.\nFig. 3 ###reference_###(a) shows the output response of CSTR with SSNNO-EKF-MPC in which denotes the output predicted by the SSNNO model and is the output with the first principle model (in Eq. (36 ###reference_###)) for the MPC control input shown in Fig. 3 ###reference_###(b). From Fig. 3 ###reference_###, it can be observed that\nthe output of the CSTR system follows the reference value with the proposed SSNNO-EKF-MPC scheme, and the control input satisfies the constraints. There is an initial overshoot in the output with the proposed scheme which is due to the plant-model mismatch which is corrected by the EKF."
|
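The prediction/update cycle referred to in this section can be illustrated with a generic extended Kalman filter step. This is a minimal sketch, assuming a discrete-time model x+ = f(x, u), y = h(x) (here f and h would stand in for the reduced-order state and output networks) and user-supplied covariances Q and R; the finite-difference Jacobian helper is an assumption of the example, not part of the paper.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func evaluated at x."""
    fx = np.atleast_1d(func(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(func(x + dx)) - fx) / eps
    return J

def ekf_step(x_est, P, u, y_meas, f, h, Q, R):
    """One EKF prediction/update cycle around x+ = f(x, u), y = h(x)."""
    # Prediction: propagate the state estimate and its covariance.
    x_pred = f(x_est, u)
    F = numerical_jacobian(lambda x: f(x, u), x_est)
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the new measurement.
    H = numerical_jacobian(h, x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + K @ (np.atleast_1d(y_meas) - np.atleast_1d(h(x_pred)))
    P_upd = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_upd, P_upd
```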
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "7",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Conclusions",
|
| 69 |
+
"text": "A state-space neural network with ordered variance is proposed as a means of identifying the model order based on training data. This results in the identification of a state-space model where the state variables are ordered in terms of decreasing variance. Further, the approach also identifies a Reduced order SSNNO (R-SSNNO) model from the trained SSNNO to predict the output with sufficient accuracy. The efficiency of the approach is illustrated using simulation on a CSTR system. The application of the proposed SSNNO in data-driven control and state estimation is presented. Future work involves the extension of the SSNNO for theoretical guarantees on variance ordering, stability, etc."
|
| 70 |
+
}
|
| 71 |
+
],
|
| 72 |
+
"appendix": [
|
| 73 |
+
{
|
| 74 |
+
"section_id": "Appendix 1",
|
| 75 |
+
"parent_section_id": null,
|
| 76 |
+
"section_name": "Appendix A Appendices",
|
| 77 |
+
"text": "The UATs [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###] ensure that for any and there exists and such that\nfor all and Using this and the Lipschitz inequality in Eq. (8 ###reference_###), the output error at instant can be bounded as:\nFurther, the state error term in the above equation can be bounded as:\nwhere given that the initial state estimate satisfies \nSubstituting Eq. (46 ###reference_###) in (45 ###reference_###) results in:\nNow, selecting and gives \nresults in \n\u220e\ngives \nSubstituting this in from Eq. (28 ###reference_###) gives:\nwhich implies \nSubstituting instead of in Eq. (31 ###reference_###) gives:\nThis completes the proof.\n\u220e\nAs per Lemma 1 ###reference_ma1###, for any , there exists an SSNN for which Next we construct an SSNNO from this SSNN which satisfies The SSNNO is constructed from the SSNN as below:\nThe output layer weight and bias in the state subnetwork of SSNNO is constructed by rearranging the rows of output layer weight and bias in the state subnetwork of SSNN (denoted by ) in terms of state variable variances, i.e., let has the largest variance in SSNN, then:\nfor to\nfor to\nThe first hidden layer weights in the state and output subnetworks of SSNNO are constructed by rearranging the columns of the corresponding weights in SSNN in terms of state variable variances, i.e., let has the largest variance in SSNN, then:\nfor to\nfor to\nThe layers 2 to in the state subnetwork of SSNNO are chosen the same as in SSNN.\nThe layers 2 to in the output subnetwork of SSNNO are chosen the same as in SSNN.\nNow, for the constructed SSNNO, the first state variables are ordered in terms of decreasing variance, and the remaining state variables have zero mean and variance. Therefore for all which gives (using Lemma 2 ###reference_ma2###).\nFurther, the predicted output with constructed SSNNO results in:\nwhich implies \nSubstituting instead of in Eq. (17 ###reference_###) gives:\nThis completes the proof.\n\u220e\nThe abbreviations used in the paper are listed in alphabetical order:\n###table_1###"
|
| 78 |
+
},
|
| 79 |
+
{
|
| 80 |
+
"section_id": "Appendix x1",
|
| 81 |
+
"parent_section_id": null,
|
| 82 |
+
"section_name": "Acknowledgments",
|
| 83 |
+
"text": "This research was supported by the Science and Engineering Research Board, Department of Science and Technology India, through grant number CRG/2022/002587."
|
| 84 |
+
}
|
| 85 |
+
],
|
| 86 |
+
"tables": {
|
| 87 |
+
"1": {
|
| 88 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Performance comparison for third-order model.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T1.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T1.5.6.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S6.T1.5.6.1.1\" style=\"padding-bottom:2.15277pt;\">Performance measure</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.5.6.1.2\" style=\"padding-bottom:2.15277pt;\">SSNNO</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.5.6.1.3\" style=\"padding-bottom:2.15277pt;\">SSNN</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"S6.T1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T1.1.1.2\">0.1379</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T1.1.1.3\">0.0029</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.2.2.2\">0.0002</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.2.2.3\">0.0010</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.3.3.2\">0.0000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.3.3.3\">0.0264</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T1.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.4.4.2\">0.0026</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.4.4.3\">0.0026</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S6.T1.5.5.1\" style=\"padding-bottom:4.30554pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T1.5.5.2\" style=\"padding-bottom:4.30554pt;\">0.0025</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T1.5.5.3\" style=\"padding-bottom:4.30554pt;\">0.0025</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 89 |
+
"capture": "Table 1: Performance comparison for third-order model."
|
| 90 |
+
},
|
| 91 |
+
"2": {
|
| 92 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance comparison for fourth-order model.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T2.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T2.6.7.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S6.T2.6.7.1.1\" style=\"padding-bottom:2.15277pt;\">Performance measure</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T2.6.7.1.2\" style=\"padding-bottom:2.15277pt;\">SSNNO</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T2.6.7.1.3\" style=\"padding-bottom:2.15277pt;\">SSNN</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T2.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"S6.T2.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T2.1.1.2\">0.1372</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T2.1.1.3\">0.0263</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.2.2.2\">0.0002</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.2.2.3\">0.0047</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T2.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.3.3.2\">0.0000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.3.3.3\">0.0446</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T2.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.2\">0.0000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.3\">0.0431</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S6.T2.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.5.5.2\">0.0026</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.5.5.3\">0.0025</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S6.T2.6.6.1\" style=\"padding-bottom:4.30554pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T2.6.6.2\" style=\"padding-bottom:4.30554pt;\">0.0046</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T2.6.6.3\" style=\"padding-bottom:4.30554pt;\">0.0049</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 93 |
+
"capture": "Table 2: Performance comparison for fourth-order model."
|
| 94 |
+
},
|
| 95 |
+
"3": {
|
| 96 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>List of Abbreviations</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A1.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.1.1.1\">AI</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.1.1.2\">Artificial Intelligence</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.2.2.1\">ANN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.2.2.2\">Artificial Neural Network</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.3.3.1\">AE</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.3.3.2\">Autoencoder</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.4.4.1\">AEO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.4.4.2\">Autoencoder with Ordered variance</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.5.5.1\">CSTR</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.5.5.2\">Continuous Stirred Tank Reactor</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.6.6.1\">DL</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.6.6.2\">Deep Learning</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.7.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.7.7.1\">EKF</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.7.7.2\">Extended Kalman Filter</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.8.8.1\">LPV</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.8.8.2\">Linear Parameter Varying</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.9.9.1\">LQR</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.9.9.2\">Linear Quadratic Regulator</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.10.10.1\">ML</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.10.10.2\">Machine Learning</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.11.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.11.11.1\">MPC</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.11.11.2\">Model Predictive Control</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.12.12.1\">NN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.12.12.2\">Neural Network</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.13.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.13.13.1\">PEM</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.13.13.2\">Prediction Error Method</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.14.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.14.14.1\">PINN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.14.14.2\">Physics Informed Neural Network</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.15.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.15.15.1\">RNN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.15.15.2\">Recurrent Neural Network</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.16.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.16.16.1\">R-SSNNO</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"A1.T3.1.16.16.2\">Reduced order SSNNO</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.17.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.17.17.1\">SPE</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.17.17.2\">Squared Prediction Error</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.18.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.18.18.1\">SSNN</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.18.18.2\">State-Space Neural Network</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.19.19\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.19.19.1\">SSNNO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.19.19.2\">SSNN with Ordered variance</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.20.20\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.20.20.1\">UAT</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.20.20.2\">Universal Approximation Theorem</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 97 |
+
"capture": "Table 3: List of Abbreviations"
|
| 98 |
+
}
|
| 99 |
+
},
|
| 100 |
+
"image_paths": {
|
| 101 |
+
"1": {
|
| 102 |
+
"figure_path": "2406.10359v2_figure_1.png",
|
| 103 |
+
"caption": "Figure 1: SSNN Block diagram.",
|
| 104 |
+
"url": "http://arxiv.org/html/2406.10359v2/x1.png"
|
| 105 |
+
},
|
| 106 |
+
"2": {
|
| 107 |
+
"figure_path": "2406.10359v2_figure_2.png",
|
| 108 |
+
"caption": "Figure 2: Response of SSNNO for CSTR with white noise: (a) Training (b) Testing (c) Step test.",
|
| 109 |
+
"url": "http://arxiv.org/html/2406.10359v2/x2.png"
|
| 110 |
+
},
|
| 111 |
+
"3": {
|
| 112 |
+
"figure_path": "2406.10359v2_figure_3.png",
|
| 113 |
+
"caption": "Figure 3: CSTR with SSNNO-EKF-MPC (a) Output (b) Control input.",
|
| 114 |
+
"url": "http://arxiv.org/html/2406.10359v2/x3.png"
|
| 115 |
+
}
|
| 116 |
+
},
|
| 117 |
+
"validation": true,
|
| 118 |
+
"references": [],
|
| 119 |
+
"url": "http://arxiv.org/html/2406.10359v2"
|
| 120 |
+
}
|
20241217/2406.10984v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2406.11497v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2406.19525v2.json
ADDED
|
@@ -0,0 +1,60 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "An Energy Stable Incompressible Multi-Phase Flow Formulation",
|
| 3 |
+
"abstract": "We show that a reformulation of the governing equations for incompressible multi-phase flow in the volume of fluid setting leads to a well defined energy rate. New nonlinear inflow-outflow and solid wall boundary conditions bound the energy rate and lead to an energy estimate in terms of only external data. The new formulation combine perfectly with summation-by-parts operators and leads to provable energy stability.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Initial boundary value problems (IBVPs) for nonlinear flow problems including boundary conditions are notoriously difficult to bound.\nWe have previously [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###], reformulated the shallow water equations and the incompressible and compressible Euler and Navier-Stokes equations\nsuch that energy estimates\nwere obtained.\nWe also discretized the new formulations and arrived at provably stable nonlinear schemes [5 ###reference_b5###].\nIn this note we provide a theoretical background for energy stability of the IBVP for incompressible multi-phase liquid-gas flows in the volume-of-fluid (VOF) formulation [6 ###reference_b6###]. Specific modeling techniques for sharpening and diffusing the interface are for now, left to others [7 ###reference_b7###, 8 ###reference_b8###].\nThe VOF formulation is applicable to complex interface motions, it is mass conservative and tracks the interface\nby advecting the volume fraction of the target phase. Combined with a single liquid-gas velocity, a \"one-fluid\" formulation [9 ###reference_b9###, 10 ###reference_b10###] results.\nIn [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###], the original equations are similar, but the dependent variables and bounds differ and most importantly: boundary conditions are essentially ignored. Here we reformulate the one-fluid VOF equations into a new set of skew-symmetric equations and derive new boundary conditions that lead to an energy bound. By discretizing using summation-by-parts (SBP) operators, energy stability follows."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "The reformulation",
|
| 15 |
+
"text": "We consider an incompressible viscous liquid () and gas () mixture in two dimensions (2D)\n(with trivial extension to 3D). Using Einstein\u2019s summation convention, the classical one-fluid VOF formulation [6 ###reference_b6###] reads\nIn (2 ###reference_###), is the volume fraction of the liquid, is velocity in direction , is the stress tensor, is the viscous stress tensor and is pressure. Furthermore, and are the volume-averaged density and viscosity, respectively. The derivatives are denoted and , We neglected the external gravity forces which have no impact on stability. To get at an energy bound, the formulation (2 ###reference_###) will be modified in two steps.\nIn the first step we replace the volume fraction by the density as dependent variable\nand move the divergence relation to the righthand side leading to ( and ) the equivalent formulation\nIn the second step we aim for a scaling of the viscous terms and introduce the new variables (see [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###], for similar but not identical choices) into (2 ###reference_###) to yield the new equation set\nIntroducing , , , , and noting that vanish, we cast (2 ###reference_###) in the final matrix-vector form\nThe first step leading to (2 ###reference_###) produces the skew-symmetric lefthand side of (2.4 ###reference_###) suitable for Green\u2019s theorem. The second step leading to (2 ###reference_###) produces a scaling of the righthand side in (2.4 ###reference_###) again opening up for Green\u2019s theorem. Reformulating (2 ###reference_###) into (2.4 ###reference_###) simplifies the upcoming energy analysis."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Energy analysis",
|
| 21 |
+
"text": "Focusing first on the skew-symmetric lefthand side of (2.4 ###reference_###), we multiply with and integrate to get\nLetting and followed by Green\u2019s theorem yields\nIn (3.2 ###reference_###), , and are respectively the boundary, its surface element and outward pointing unit normal.\nFocusing secondly on the rescaled righthand-side of (2.4 ###reference_###), we see that the righthand side of (3.2 ###reference_###) implies\nwhere we again used Green\u2019s theorem.\nInserting (3 ###reference_###) into (3.2 ###reference_###) and rearranging leads to the energy rate\nwhere the term () provides dissipation and the boundary term is\nIn (3.5 ###reference_###), and .\nThe subscripts denote components normal and tangential to .\nFor the stress term in (3.5 ###reference_###) we use and the rotation matrix (given in (3.20 ###reference_### below) to obtain\n."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "The boundary conditions",
|
| 27 |
+
"text": "The unrestrained boundary term in (3.5 ###reference_###) can be written in vector-matrix-vector form as\nwhile and are given above.\nNext, we introduce the non-singular block rotation matrix as\nwhere the blocks are matrices. The rotation matrix (3.7 ###reference_###) introduced into (3.6 ###reference_###) yields\nWith matrix being non-singular, we cancel the off-diagonal matrices in (3.1 ###reference_###) with to get\nThe details in (3.1 ###reference_###) and (3.10 ###reference_###) reveal that five independent variables are involved in the boundary conditions.\nDiagonalizing the boundary term (3.6 ###reference_###) with standard eigenvalue techniques leads in general to very complex eigenvalues and eigenvectors, and hence complicated non-physical boundary conditions [15 ###reference_b15###]."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.1.1",
|
| 31 |
+
"parent_section_id": "3.1",
|
| 32 |
+
"section_name": "3.1.1 Boundary conditions of inflow-outflow type with nonzero external data",
|
| 33 |
+
"text": "We start by determining the number of boundary conditions [16 ###reference_b16###] required at inflow-outflow boundaries.\nAt inflow , and we need three conditions since has three components.\nAt outflow , and we need two conditions since has two nonzero components.\nFollowing [5 ###reference_b5###], the boundary conditions are applied weakly by inserting (3.1 ###reference_###) and a lifting operator into (3.4 ###reference_###):\nHere denotes the boundary conditions, is a penalty matrix and is a lifting operator implementing boundary conditions weakly. It is defined by , where and are smooth vectors.\nWe will apply boundary conditions such that the boundary terms in (3.11 ###reference_###) are bounded by external data only.\nFollowing [3 ###reference_b3###], we consider the\nnonlinear characteristic-like boundary condition on weak form\nwhere and are functions of the solution and is external data.\nTo implement inflow boundary conditions weakly where , we need an operator so that\nAt an outflow boundary where we require an operator so that\nImplementing the boundary condition (3.12 ###reference_###) weakly into (3.11 ###reference_###) yield the augmented boundary term:\nNext we choose , where or depending on the signs of , which gives\nChoosing as well as adding and subtracting leads to an estimate in terms of data only since\nThe term related to in (3.15 ###reference_###) is positive, dissipative and requires no modification.\nIBVPs with estimates in terms of only external data are cited as strongly energy bounded [17 ###reference_b17###]."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1.2",
|
| 37 |
+
"parent_section_id": "3.1",
|
| 38 |
+
"section_name": "3.1.2 Boundary conditions of solid wall type with zero external data",
|
| 39 |
+
"text": "At solid wall boundaries, where we work directly on the in (3.6 ###reference_###) which we group as follows\nWe will remove the terms within brackets on the righthand-side of (3.18 ###reference_###) by appropriately selecting the penalty terms and . Only the two boundary conditions are available at a solid wall. For consistency reasons they must cancel both bracketed terms in (3.18 ###reference_###) when imposed weakly.\nFor the first term in brackets in (3.18 ###reference_###), we set with to obtain\nwhere\nis the rotation matrix used in (3.5 ###reference_###) above. Next, we insert and into (3.19 ###reference_###) to give\nEquating yields stability. Consistency is proven by noting that is forced to zero by ,\nsince\nFor the second term in brackets in (3.18 ###reference_###), we set with to obtain\nwhere , and\nSetting and into (3.22 ###reference_###) leads to,\nEquating yields stability. Consistency is proven by noting that forces and to zero, since\nThe new inflow-outflow and solid wall boundary conditions leads to a strongly energy bounded VOF formulation with bounds on density, velocities and volume fraction in terms of external data only."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "A stable straightforward nonlinear numerical approximation",
|
| 45 |
+
"text": "The focus in this short note is on the continuous analysis, but for clarity we now sketch how the continuous formulation can be mimicked discretely, leading to\na stable scheme.\nThe semi-discrete version of (2.4 ###reference_###) (ignoring boundary conditions) using summation-by-parts (SBP) operators [5 ###reference_b5###, 18 ###reference_b18###, 19 ###reference_b19###] can be written\nIn (4.1 ###reference_###), where denotes the Kronecker product, the vector approximates and the vector approximates in each node.\nThe matrix elements of and are matrices with node values of the matrix elements in and injected on the diagonal as exemplified below in matrix\nMoreover, , and where\n are 1D SBP difference operators.\nAll matrices have appropriate sizes such that the matrix-matrix and matrix-vector operations are defined.\nThe discrete energy method (multiply (4.1 ###reference_###) from the left with ) yields\nsince and commute with the diagonal symmetric positive definite integration operator . Using the notation , noting that only the symmetric part of remains, and after applying the SBP properties (see [5 ###reference_b5###, 18 ###reference_b18###, 19 ###reference_b19###] for details) we obtain the semi-discrete energy rate\nThe semi-discrete energy rate (4.4 ###reference_###) mimicks the continuous energy rate (3.4 ###reference_###) perfectly on a rectangular domain. The second term from the left in (4.4 ###reference_###) is the dissipation numerically integrated over the domain (using the volume integrator ). The next two terms, contain the boundary terms numerically integrated along the boundary (using the boundary integrators and )."
|
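As a concrete illustration of the SBP structure invoked in the section above (a generic sketch, not the specific operators or Kronecker-product assembly used in the note): a classical second-order 1D SBP first-derivative operator has the form D = H^{-1}Q, where H is a diagonal positive-definite norm (quadrature) matrix and Q + Q^T = B = diag(-1, 0, ..., 0, 1); this is the property that lets the discrete energy method reduce to boundary terms as in (4.3)-(4.4). The operator choice below is an assumption for illustration only.

```python
import numpy as np

def sbp_first_derivative(n, h):
    # Classical 2nd-order SBP pair (H, Q): H is a diagonal positive-definite
    # quadrature matrix, Q satisfies Q + Q^T = B = diag(-1, 0, ..., 0, 1).
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = np.zeros((n, n))
    for i in range(n - 1):          # skew-symmetric central part in the interior
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0] = -0.5                  # boundary closures carrying the SBP property
    Q[-1, -1] = 0.5
    D = np.linalg.solve(H, Q)       # D = H^{-1} Q approximates d/dx
    return D, H, Q

D, H, Q = sbp_first_derivative(11, 0.1)
B = Q + Q.T
print(np.allclose(B, np.diag([-1.0] + [0.0] * 9 + [1.0])))   # True: SBP property holds
print(np.max(np.abs(D @ np.linspace(0.0, 1.0, 11) - 1.0)))   # ~0: exact on linear data
```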
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "5",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Summary and outlook",
|
| 51 |
+
"text": "A new formulation of the incompressible multi-phase liguid-gas flow equations that leads to a well defined energy rate was derived. It was complemented with new weak boundary conditions of inflow-outflow and solid wall types that lead to a unique result for nonlinear VOF formulations: energy bounds of density, velocities and volume fraction in terms of external data only.\nThe paper was concluded with a short illustration of how to construct a semi-discrete energy stable scheme by combining the new formulation with summation-by-parts operators. In future work we will develop nonlinear strongly energy stable schemes based on this new provably strongly energy bounded continuous formulation."
|
| 52 |
+
}
|
| 53 |
+
],
|
| 54 |
+
"appendix": [],
|
| 55 |
+
"tables": {},
|
| 56 |
+
"image_paths": {},
|
| 57 |
+
"validation": true,
|
| 58 |
+
"references": [],
|
| 59 |
+
"url": "http://arxiv.org/html/2406.19525v2"
|
| 60 |
+
}
|
20241217/2407.03384v3.json
ADDED
|
@@ -0,0 +1,383 @@
| 1 |
+
{
|
| 2 |
+
"title": "Topological Separation of Vortices",
|
| 3 |
+
"abstract": "Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flows.\nTraditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed \u201clayering\u201d. Upon extracting a vortical region using specified vortex criteria (e.g., ), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Related Work",
|
| 9 |
+
"text": "Vortex identification and separation techniques for turbulent flow analysis have seen significant advancements over the years, driven by the need to better understand complex flow phenomena. Early seminal work [9 ###reference_b9###] introduced the -criterion, a widely adopted method for identifying regions with high swirling motion in flows. Building upon this foundation, subsequent studies by [4 ###reference_b4###], [10 ###reference_b10###] and [12 ###reference_b12###] introduced , and criteria which furthered our understanding of vortical structures. These are region-based methods which typically require threshold values that can impact the size and extent of the extracted vortices.\nLine-based methods [15 ###reference_b15###, 19 ###reference_b19###, 17 ###reference_b17###] are used to extract vortex corelines around which fluid particles revolve. Vortex coreline methods are usually parameter free, yet they often yield fragmented lines, presenting a challenge in accurately categorizing a vortex into a specific type [22 ###reference_b22###]. These are local methods of vortex extraction that utilize the velocity vector at a point to calculate subsequent criteria. Additionally, global approaches such as Geometric methods [1 ###reference_b1###], Integration-based methods [21 ###reference_b21###, 19 ###reference_b19###], Objective methods [7 ###reference_b7###, 8 ###reference_b8###, 16 ###reference_b16###], and Feature level sets [13 ###reference_b13###] offer alternative solutions. These approaches leverage streamlines, pathlines or observe the attraction behavior of injected particles over time to identify vortices.\nTopological segmentation approaches based on contour-trees [3 ###reference_b3###] have been introduced to identify vortices [2 ###reference_b2###, 18 ###reference_b18###, 23 ###reference_b23###]. [2 ###reference_b2###] introduced a novel vortex detection technique based on topological analysis of a scalar indicator function. The method identifies seeds for potential vortices as local maxima/minima of the indicator function and optimizes a local threshold for each vortex using topological encoding by using a criterion called relevance. [18 ###reference_b18###] presented a visualization tool facilitating the comparison of two scalar fields using iso-surfaces, extracted via the largest contour segmentation of the scalar field. In a recent study, [23 ###reference_b23###] performed the contour-tree based segmentation of vortices. They first extract the vortical regions using an indicator function (e.g., ) and then separate the regions using progressive extraction of iso-surfaces. This ends up in a hierarchical tree representing the split/merge relation of vortices. However, their approach presents several limitations as mentioned in Topological Separation of Vortices. Vortices can undergo complex interactions, including merging, splitting, and stretching. Separating entangled vortices, particularly in turbulent flows, remains a challenging task."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Our Method",
|
| 15 |
+
"text": "In this section, we first examine the topology-based vortex separation process, then discuss mitigating the inaccurate splitting issue.\n###figure_1### ###figure_2###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Topology-based Vortex Separation",
|
| 21 |
+
"text": "Contour trees [3 ###reference_b3###] encode the merging and splitting relations of the level sets of scalar fields. For a set of points in 3D space and a scalar field , a level set is defined as . As the value of c changes, the level sets evolve, splitting and merging, which is encoded by split and join trees, respectively. The evolution of level sets occurs exclusively at topological critical points where . In the case of a join tree, the leaf nodes represent the minima () of , and as we increase the value of the level sets merge at the saddle () points of . Assuming a region with a simplified scalar field having only one , the smallest possible join tree can be represented as shown in Fig. 3 ###reference_###(b). The region can be segmented by assigning IDs to points based on their scalar value falling within one of three ranges corresponding to the three edge pairs of the join tree. The three edge pairs are (-), (-) and (-), where = Maximum. The segment seeds from the first critical point of the pair and continues to grow until the second critical point is reached, making the segments distinct from each other. This segmentation is depicted in Fig. 4 ###reference_###(c) using red, blue, and green colors, respectively.\n###figure_3### ###figure_4### ###figure_5### Given a minimal join tree, our goal is to separate the region into exactly two vortices. The segments corresponding to the - pairs belong to the vortices that are to be separated (blue and green in Fig. 4 ###reference_###(c)). We call them the seed segments. To achieve complete separation of the vortical region into exactly two vortices, one of the IDs from the seed segments must be assigned to the segment corresponding to the - pair (red in Fig. 4 ###reference_###(c)). We call this the query segment. While a strategy similar to [23 ###reference_b23###] could be employed to assign IDs to the query segment based on the Euclidean distance to the seed segment, it may suffer from the limitations discussed in Topological Separation of Vortices. To rectify this limitation, an accurate measure of distance is required, such as graph geodesic distance. However, employing graph geodesic distance in this scenario is computationally demanding, as it requires computing distances between each cell in the query segment and all cells in the seed segment to find the nearest seed segment. This is where our \u201clayering\u201d strategy comes in. Layering is visualized in Fig. 5 ###reference_### and works as follows:\nIdentify cells in the query segment that are immediate neighbors of the cells in the seed segments. We refer to this collection of cells as a layer.\nAssign IDs to the cells in the layer based on the ID of the closest neighboring cell in the seed segments.\nIterate through steps (1) and (2) until there are no cells remaining in the query segment.\n###figure_6### ###figure_7### ###figure_8### Critical point pairs of the minimal join tree are chosen based on the persistence [5 ###reference_b5###] of the pairs. For this purpose, we get the persistence diagram of the scalar field and choose a (-) and a (-) pair with the highest persistence as depicted in Fig. 3 ###reference_###(a). In this paper, we use -criterion as the scalar field but the method is equally valid for other region-based criterion such as [9 ###reference_b9###], [10 ###reference_b10###], [14 ###reference_b14###, 20 ###reference_b20###], etc.. 
Initially, we extract the vortical regions utilizing the region growing strategy from [23 ###reference_b23###] using criterion, then our vortex separation approach works as follows. For each disconnected region, we do the following:\nGet persistence diagram of the input scalar field for the region.\nChoose critical point pairs as depicted in Fig. 3 ###reference_###(a) with the highest persistence.\nSimplify the topology by removing all remaining critical points in the region based on the selected critical points.\nExtract the join tree and get the initial segmentation in the form of seeds and query segments.\nIf both seed segments have at least one cell, then do \u201clayering\u201d and split the region. Otherwise, go to (2) and pick a new - pair with lower persistence. Stop, if no - pairs are left.\nAutomatically check whether to avoid the split using Eq. 1 ###reference_###. If it needs to be avoided, go to step (2) and pick a new - pair with lower persistence. Otherwise, finalize the split and record the changes in the vortex hierarchy.\nFor each new region, do (1)\u2013(6) recursively."
|
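A minimal sketch of the "layering" sweep described in section 2.1 above; the cell/adjacency data structures, the function name, and the tie-breaking when a query cell is reached by both seed segments in the same sweep are assumptions for illustration, not the authors' implementation:

```python
from collections import deque

def layering(segment_id, neighbors):
    """Grow the two seed segments (IDs 0 and 1) one layer at a time until no
    query cells (ID -1) remain; each query cell inherits the ID of the first
    labelled neighbor that reaches it, approximating graph-geodesic proximity.

    segment_id: dict cell -> -1 (query segment) or 0/1 (seed segments)
    neighbors:  dict cell -> iterable of adjacent cells
    """
    frontier = deque(c for c, s in segment_id.items() if s >= 0)
    while frontier:
        layer = {}                                   # cells reached in this sweep
        for c in frontier:
            for n in neighbors[c]:
                if segment_id.get(n) == -1 and n not in layer:
                    layer[n] = segment_id[c]
        for n, sid in layer.items():
            segment_id[n] = sid
        frontier = deque(layer)                      # newly labelled cells seed the next sweep
    return segment_id

# Tiny 1-D example: six cells in a row, seeds at the two ends.
cells = {0: 0, 1: -1, 2: -1, 3: -1, 4: -1, 5: 1}
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
print(layering(cells, nbrs))   # {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```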
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Check and Avoid Inaccurate Splits",
|
| 27 |
+
"text": "To avoid inaccurate splits, we extract and utilize vorticity lines close to the boundary of the split. We move a few layers of cells away from the boundary towards the larger region and uniformly select as seeds for the vorticity lines. Our algorithm checks and decides whether the split should be avoided based on the following equation,\nwhere represents the percentage of cells overlapped by the vorticity lines in the smaller region. This examines if the vorticity lines\u2019 trend persists across the split boundary. If a large area is covered by the vorticity lines in the smaller region originating from the larger region, the trend is consistent, and the region should not be split, as shown in Fig. 6 ###reference_###(b). The reason behind selecting seeds close to, rather than precisely at, the boundary is exemplified in Fig. 6 ###reference_###(a). In this scenario, two vortices form a \u201dV\u201d shape as indicated by the arrows of the vectors. If the boundary points were utilized as seeds, the vorticity lines could extend into both regions, resulting in a high value of despite being an inaccurate split case. This discrepancy arises because the boundary doesn\u2019t precisely delineate vortices but rather marks the topological boundaries of vortical regions identified by contour tree-based segmentation and layering. We shift 5 layers of cells away from the boundary points towards the larger region and utilize their points as the seeds (more details in the supplemental). In the earlier levels of splitting, the region sizes are bigger and the boundary could have multiple interfaces. We let such regions split because the physics of the vorticity lines through multiple interfaces of the boundary is complex and a simple value such as doesn\u2019t suffice. Therefore we only use this strategy when the boundary has only one interface. Additionally, vorticity lines may not follow the vortex shape and may terminate prematurely, especially in weaker vortices. Our method fails to avoid splits in these cases. Thus, while our method significantly improves split accuracy, it cannot completely prevent inaccurate splits. We leave such complex scenarios for future work.\n###figure_9### ###figure_10###"
|
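Since Eq. (1) itself is not legible in this extraction, the following is only a hedged sketch of the coverage test the prose describes (fraction of the smaller region's cells overlapped by vorticity lines seeded a few layers away from the split boundary); the function name, arguments, normalization, and threshold are assumptions:

```python
def should_avoid_split(line_cells, smaller_region_cells, threshold):
    """line_cells: set of cells overlapped by vorticity lines traced from seeds
    placed ~5 cell layers from the split boundary, inside the larger region.
    smaller_region_cells: set of cells of the smaller candidate part.
    Returns True when the lines keep covering the smaller region, i.e. the
    vorticity trend continues across the boundary and the split looks inaccurate."""
    if not smaller_region_cells:
        return False
    coverage = len(line_cells & smaller_region_cells) / len(smaller_region_cells)
    return coverage >= threshold

print(should_avoid_split({1, 2, 3, 4}, {2, 3, 4, 5}, 0.6))   # 0.75 coverage -> True, avoid split
```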
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Results",
|
| 33 |
+
"text": "Our method avoids the wrong ID assignment problem by employing the layering strategy (Section 2.1 ###reference_###). The seed segments expand uniformly until the region is completely separated (Fig. 5 ###reference_###), which indirectly approximates the graph geodesic distance. Critical points are leveraged for the separation process, where scalar values dynamically adapt to the current region, unlike the global thresholds utilized in the previous approach [23 ###reference_b23###]. By using critical points for separation, we address the issue of regions being inadequately covered by iso-surfaces or not being sufficiently split. Furthermore, we utilize seed segments instead of iso-surfaces for ID assignment, eliminating the need for an additional processing step to filter unnecessary iso-surface components. Finally, our vortex separation approach is adaptive. At each level, the region is split into exactly two segments based on the minimal join tree. The process continues until the specified stop condition is met. It eliminates the necessity to experiment with multiple values of VSF or rely on visual cues from the user. The only user-specified parameter required is the value of in Eq. 1 ###reference_### which does not introduce any of the issues outlined in Topological Separation of Vortices. In the following, we apply our method to turbulent flow datasets and showcase several instances where our algorithm demonstrates better results as compared to the previous approach [23 ###reference_b23###]."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Vortex Separation Results",
|
| 39 |
+
"text": "###figure_11### ###figure_12### ###figure_13### ###figure_14### In this section, we compare our results with the previous approach [23 ###reference_b23###] to illustrate the superiority of our vortex separation method. We concentrate on three primary areas of improvement, namely: (1) Adaptive Splits, (2) Avoiding Inaccurate Splits, and (3) Ensuring Sufficient Splits. Fig. 7 ###reference_### shows an example of a (one-legged) hairpin vortex from the Couette flow dataset [11 ###reference_b11###]. This hairpin vortex is a part of a cluster of vortices (Fig. 2 ###reference_###) found in the vicinity of a low-speed streak as mentioned in Sec. III(D) of [11 ###reference_b11###]. It is evident from the trend of the vorticity lines in Fig. 7 ###reference_### that our method effectively removes unwanted segments (blue, red, and white in Fig. 7 ###reference_###(b, d)) while preserving the desired segment (green in Fig. 7 ###reference_###(b, d)). This success is attributed to our utilization of local critical points for separation, as opposed to the global thresholds extracted using a statistical method, as was done in [23 ###reference_b23###], which failed to achieve the required separation, as illustrated in Fig. 7 ###reference_###(a).\n###figure_15### ###figure_16### We compare the inaccurate split results in Fig. 8 ###reference_###. Some obvious splits are highlighted in Fig. 8 ###reference_###(b) where vorticity lines show a clear trend continuation. Our method avoids such splits as shown in Fig. 8 ###reference_###(a). Although some inaccurate splits still occur, especially at the edges of the vortices as highlighted in Fig. 8 ###reference_###(a), they do not pose a risk of misclassifying a vortex and can be regarded as noise. For instance, as demonstrated in Fig. 6 ###reference_###(b), when a hairpin vortex splits inaccurately, it forms two streamwise vortices, hindering subsequent analysis aimed at identifying specific vortex populations within the flow. Furthermore, these inaccurate segments appear in regions where the strength of vorticity is notably low and the vorticity lines integrate out of the domain without following the shape of the vortex (blue vortex in Fig. 8 ###reference_###(a)).\n###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### In Fig. 9 ###reference_###, we share the results on -th subset of a single timestamp of the turbulent channel flow at a friction Reynolds number based on Johns Hopkins Turbulence Database (JHTDB) [6 ###reference_b6###] as shown in Fig. 9 ###reference_###(a). It can be seen in Fig. 9 ###reference_###(b\u2013d) that the separation results significantly differ based on different values of VSF parameter of [23 ###reference_b23###]. Some vortices are split insufficiently while others are inaccurately split as shown in the highlighted areas. It is hard to determine even with the user\u2019s visual analysis which split should be considered accurate. In contrast, our separation method is regulated by a less sensitive parameter (), making our results more uniform and robust as shown in Fig. 9 ###reference_###(e)."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Discussion and Future Work",
|
| 45 |
+
"text": "In this work, we presented a vortex separation technique based on scalar field critical points. In order to overcome several limitations of the previous method, we introduced the \u201clayering\u201d strategy and partially overcame the inaccurate split problem with the statistical inclusion of vorticity lines. Our algorithm\u2019s performance is slower compared to that of [23 ###reference_b23###], attributable to two main factors. Firstly, our method explores all critical point pairs within the underlying region for separation, which could be considerably higher in number compared to a global threshold selection approach. Secondly, the layering strategy introduces additional processing time. This trade-off between performance and accuracy suggests potential improvements by implementing criteria such as persistence to curtail the number of critical points considered. We also plan to further explore the interactions of vorticity lines when the split boundary is relatively complex having multiple interfaces."
|
| 46 |
+
}
|
| 47 |
+
],
|
| 48 |
+
"appendix": [],
|
| 49 |
+
"tables": {},
|
| 50 |
+
"image_paths": {
|
| 51 |
+
"1(a)": {
|
| 52 |
+
"figure_path": "2407.03384v3_figure_1(a).png",
|
| 53 |
+
"caption": "(a)\nFigure 1: (a) shows a vortical region (light-blue) extracted with the region growing [23] using \u03bb2subscript\ud835\udf062\\lambda_{2}italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT criterion. Variance in the patterns of vorticity lines (black) indicate the presence of multiple vortices. (b) shows a hairpin vortex (red) correctly getting separated from the rest of the vortices. (c) shows the zoomed in version of the hairpin vortex. (d) shows the inaccurate split of the hairpin vortex.",
|
| 54 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig1_1_a.png"
|
| 55 |
+
},
|
| 56 |
+
"1(b)": {
|
| 57 |
+
"figure_path": "2407.03384v3_figure_1(b).png",
|
| 58 |
+
"caption": "(b)\nFigure 1: (a) shows a vortical region (light-blue) extracted with the region growing [23] using \u03bb2subscript\ud835\udf062\\lambda_{2}italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT criterion. Variance in the patterns of vorticity lines (black) indicate the presence of multiple vortices. (b) shows a hairpin vortex (red) correctly getting separated from the rest of the vortices. (c) shows the zoomed in version of the hairpin vortex. (d) shows the inaccurate split of the hairpin vortex.",
|
| 59 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig1_1_b.png"
|
| 60 |
+
},
|
| 61 |
+
"1(c)": {
|
| 62 |
+
"figure_path": "2407.03384v3_figure_1(c).png",
|
| 63 |
+
"caption": "(c)\nFigure 1: (a) shows a vortical region (light-blue) extracted with the region growing [23] using \u03bb2subscript\ud835\udf062\\lambda_{2}italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT criterion. Variance in the patterns of vorticity lines (black) indicate the presence of multiple vortices. (b) shows a hairpin vortex (red) correctly getting separated from the rest of the vortices. (c) shows the zoomed in version of the hairpin vortex. (d) shows the inaccurate split of the hairpin vortex.",
|
| 64 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig1_1_c.png"
|
| 65 |
+
},
|
| 66 |
+
"1(d)": {
|
| 67 |
+
"figure_path": "2407.03384v3_figure_1(d).png",
|
| 68 |
+
"caption": "(d)\nFigure 1: (a) shows a vortical region (light-blue) extracted with the region growing [23] using \u03bb2subscript\ud835\udf062\\lambda_{2}italic_\u03bb start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT criterion. Variance in the patterns of vorticity lines (black) indicate the presence of multiple vortices. (b) shows a hairpin vortex (red) correctly getting separated from the rest of the vortices. (c) shows the zoomed in version of the hairpin vortex. (d) shows the inaccurate split of the hairpin vortex.",
|
| 69 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig1_1_d.png"
|
| 70 |
+
},
|
| 71 |
+
"2(a)": {
|
| 72 |
+
"figure_path": "2407.03384v3_figure_2(a).png",
|
| 73 |
+
"caption": "(a)\nFigure 2: (a) shows the vortical region (light-blue) and the underlying iso-surface components (blue, red, green, etc.). (b) shows the assigned colors to the region\u2019s cells based on the Euclidean distance from the closest iso-surface component. The highlighted area shows the wrong color (blue) assigned to the cells of the red vortex.",
|
| 74 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig3_1_a.png"
|
| 75 |
+
},
|
| 76 |
+
"2(b)": {
|
| 77 |
+
"figure_path": "2407.03384v3_figure_2(b).png",
|
| 78 |
+
"caption": "(b)\nFigure 2: (a) shows the vortical region (light-blue) and the underlying iso-surface components (blue, red, green, etc.). (b) shows the assigned colors to the region\u2019s cells based on the Euclidean distance from the closest iso-surface component. The highlighted area shows the wrong color (blue) assigned to the cells of the red vortex.",
|
| 79 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig3_1_b.png"
|
| 80 |
+
},
|
| 81 |
+
"3(a)": {
|
| 82 |
+
"figure_path": "2407.03384v3_figure_3(a).png",
|
| 83 |
+
"caption": "(a)\nFigure 3: (a) shows the persistence diagram of two critical point pairs with the highest persistence. Here maximum(\ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M), saddle(\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) and minimum(\ud835\udd2a\ud835\udd2a\\mathfrak{m}fraktur_m) points are represented by red, cyan and blue, respectively. (b) shows the corresponding minimal join tree of the chosen critical point pairs with one \ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M, one \ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S and two \ud835\udd2a\ud835\udd2a\\mathfrak{m}fraktur_m.",
|
| 84 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_3_b.png"
|
| 85 |
+
},
|
| 86 |
+
"3(b)": {
|
| 87 |
+
"figure_path": "2407.03384v3_figure_3(b).png",
|
| 88 |
+
"caption": "(b)\nFigure 3: (a) shows the persistence diagram of two critical point pairs with the highest persistence. Here maximum(\ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M), saddle(\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) and minimum(\ud835\udd2a\ud835\udd2a\\mathfrak{m}fraktur_m) points are represented by red, cyan and blue, respectively. (b) shows the corresponding minimal join tree of the chosen critical point pairs with one \ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M, one \ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S and two \ud835\udd2a\ud835\udd2a\\mathfrak{m}fraktur_m.",
|
| 89 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_3_a.png"
|
| 90 |
+
},
|
| 91 |
+
"4(a)": {
|
| 92 |
+
"figure_path": "2407.03384v3_figure_4(a).png",
|
| 93 |
+
"caption": "(a)\nFigure 4: (a) shows a single region containing a streamwise and a horseshoe vortex, indicated by the vorticity line (black). (b) shows the join tree embedded within the region indicating the corresponding location of the critical points. (c) shows the segmentation of the region based on the join tree, where green and blue cells correspond to two \ud835\udd2a\ud835\udd2a\\mathfrak{m}fraktur_m-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S pairs, and the red cells correspond to a \ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S pair.",
|
| 94 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_1_b.png"
|
| 95 |
+
},
|
| 96 |
+
"4(b)": {
|
| 97 |
+
"figure_path": "2407.03384v3_figure_4(b).png",
|
| 98 |
+
"caption": "(b)\nFigure 4: (a) shows a single region containing a streamwise and a horseshoe vortex, indicated by the vorticity line (black). (b) shows the join tree embedded within the region indicating the corresponding location of the critical points. (c) shows the segmentation of the region based on the join tree, where green and blue cells correspond to two \ud835\udd2a\ud835\udd2a\\mathfrak{m}fraktur_m-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S pairs, and the red cells correspond to a \ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S pair.",
|
| 99 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_1_c.png"
|
| 100 |
+
},
|
| 101 |
+
"4(c)": {
|
| 102 |
+
"figure_path": "2407.03384v3_figure_4(c).png",
|
| 103 |
+
"caption": "(c)\nFigure 4: (a) shows a single region containing a streamwise and a horseshoe vortex, indicated by the vorticity line (black). (b) shows the join tree embedded within the region indicating the corresponding location of the critical points. (c) shows the segmentation of the region based on the join tree, where green and blue cells correspond to two \ud835\udd2a\ud835\udd2a\\mathfrak{m}fraktur_m-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S pairs, and the red cells correspond to a \ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S pair.",
|
| 104 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_1_d.png"
|
| 105 |
+
},
|
| 106 |
+
"5(a)": {
|
| 107 |
+
"figure_path": "2407.03384v3_figure_5(a).png",
|
| 108 |
+
"caption": "(a)\nFigure 5: (a) shows the initial segments obtained from the minimal join tree. IDs 00 and 1111 represent the seed segments corresponding to the (\ud835\udd2a1subscript\ud835\udd2a1\\mathfrak{m}_{1}fraktur_m start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) and (\ud835\udd2a2subscript\ud835\udd2a2\\mathfrak{m}_{2}fraktur_m start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) pairs, respectively. \u221211-1- 1 is the ID of the query segment corresponding to the (\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S-\ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M) pair. (b) displays the same region after several iterations of the layering process. It is evident that the seed segments have expanded, resulting in fewer cells remaining in the query segment. (c) demonstrates the region completely separated at the conclusion of the layering process.",
|
| 109 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_2_a.png"
|
| 110 |
+
},
|
| 111 |
+
"5(b)": {
|
| 112 |
+
"figure_path": "2407.03384v3_figure_5(b).png",
|
| 113 |
+
"caption": "(b)\nFigure 5: (a) shows the initial segments obtained from the minimal join tree. IDs 00 and 1111 represent the seed segments corresponding to the (\ud835\udd2a1subscript\ud835\udd2a1\\mathfrak{m}_{1}fraktur_m start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) and (\ud835\udd2a2subscript\ud835\udd2a2\\mathfrak{m}_{2}fraktur_m start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) pairs, respectively. \u221211-1- 1 is the ID of the query segment corresponding to the (\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S-\ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M) pair. (b) displays the same region after several iterations of the layering process. It is evident that the seed segments have expanded, resulting in fewer cells remaining in the query segment. (c) demonstrates the region completely separated at the conclusion of the layering process.",
|
| 114 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_2_b.png"
|
| 115 |
+
},
|
| 116 |
+
"5(c)": {
|
| 117 |
+
"figure_path": "2407.03384v3_figure_5(c).png",
|
| 118 |
+
"caption": "(c)\nFigure 5: (a) shows the initial segments obtained from the minimal join tree. IDs 00 and 1111 represent the seed segments corresponding to the (\ud835\udd2a1subscript\ud835\udd2a1\\mathfrak{m}_{1}fraktur_m start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) and (\ud835\udd2a2subscript\ud835\udd2a2\\mathfrak{m}_{2}fraktur_m start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S) pairs, respectively. \u221211-1- 1 is the ID of the query segment corresponding to the (\ud835\udd4a\ud835\udd4a\\mathbb{S}blackboard_S-\ud835\udd44\ud835\udd44\\mathbb{M}blackboard_M) pair. (b) displays the same region after several iterations of the layering process. It is evident that the seed segments have expanded, resulting in fewer cells remaining in the query segment. (c) demonstrates the region completely separated at the conclusion of the layering process.",
|
| 119 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_2_c.png"
|
| 120 |
+
},
|
| 121 |
+
"6(a)": {
|
| 122 |
+
"figure_path": "2407.03384v3_figure_6(a).png",
|
| 123 |
+
"caption": "(a)\nFigure 6: This figure shows how the vorticity lines help avoid the inaccurate split problem. (a) shows a valid split of two vortices (blue and red) as R1subscript\ud835\udc451R_{1}italic_R start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT value of the cells overlapped by the vorticity lines (black) is close to 0. (b) shows an inaccurate split where the R1subscript\ud835\udc451R_{1}italic_R start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT value is close to 1 which subsequently is avoided. Arrows (yellow) show the direction of the v\u2062o\u2062r\u2062t\u2062i\u2062c\u2062i\u2062t\u2062y\ud835\udc63\ud835\udc5c\ud835\udc5f\ud835\udc61\ud835\udc56\ud835\udc50\ud835\udc56\ud835\udc61\ud835\udc66vorticityitalic_v italic_o italic_r italic_t italic_i italic_c italic_i italic_t italic_y vectors.",
|
| 124 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_4a.png"
|
| 125 |
+
},
|
| 126 |
+
"6(b)": {
|
| 127 |
+
"figure_path": "2407.03384v3_figure_6(b).png",
|
| 128 |
+
"caption": "(b)\nFigure 6: This figure shows how the vorticity lines help avoid the inaccurate split problem. (a) shows a valid split of two vortices (blue and red) as R1subscript\ud835\udc451R_{1}italic_R start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT value of the cells overlapped by the vorticity lines (black) is close to 0. (b) shows an inaccurate split where the R1subscript\ud835\udc451R_{1}italic_R start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT value is close to 1 which subsequently is avoided. Arrows (yellow) show the direction of the v\u2062o\u2062r\u2062t\u2062i\u2062c\u2062i\u2062t\u2062y\ud835\udc63\ud835\udc5c\ud835\udc5f\ud835\udc61\ud835\udc56\ud835\udc50\ud835\udc56\ud835\udc61\ud835\udc66vorticityitalic_v italic_o italic_r italic_t italic_i italic_c italic_i italic_t italic_y vectors.",
|
| 129 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig4_4b.png"
|
| 130 |
+
},
|
| 131 |
+
"7(a)": {
|
| 132 |
+
"figure_path": "2407.03384v3_figure_7(a).png",
|
| 133 |
+
"caption": "(a)\nFigure 7: (a) illustrates a hairpin vortex, evident from the strong positive and strong negative values of \u03c9y\u2032superscriptsubscript\ud835\udf14\ud835\udc66\u2032\\omega_{y}^{\\prime}italic_\u03c9 start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT in the head and leg of the hairpin vortex, respectively, as indicated by the color of the vorticity lines. This represents the finalized separation using the method in [23], denoted by the single color of the region. (b) depicts the same vortex with our method, showcasing individual separated segments in different colors. (c) presents the same vortex from a different angle, while (d) displays our results from the same angle as (c). It is evident that our vortex extraction method removes extra blobs (blue, red, white) while retaining the vortex of interest (green).",
|
| 134 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_3a.png"
|
| 135 |
+
},
|
| 136 |
+
"7(b)": {
|
| 137 |
+
"figure_path": "2407.03384v3_figure_7(b).png",
|
| 138 |
+
"caption": "(b)\nFigure 7: (a) illustrates a hairpin vortex, evident from the strong positive and strong negative values of \u03c9y\u2032superscriptsubscript\ud835\udf14\ud835\udc66\u2032\\omega_{y}^{\\prime}italic_\u03c9 start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT in the head and leg of the hairpin vortex, respectively, as indicated by the color of the vorticity lines. This represents the finalized separation using the method in [23], denoted by the single color of the region. (b) depicts the same vortex with our method, showcasing individual separated segments in different colors. (c) presents the same vortex from a different angle, while (d) displays our results from the same angle as (c). It is evident that our vortex extraction method removes extra blobs (blue, red, white) while retaining the vortex of interest (green).",
|
| 139 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_3b.png"
|
| 140 |
+
},
|
| 141 |
+
"7(c)": {
|
| 142 |
+
"figure_path": "2407.03384v3_figure_7(c).png",
|
| 143 |
+
"caption": "(c)\nFigure 7: (a) illustrates a hairpin vortex, evident from the strong positive and strong negative values of \u03c9y\u2032superscriptsubscript\ud835\udf14\ud835\udc66\u2032\\omega_{y}^{\\prime}italic_\u03c9 start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT in the head and leg of the hairpin vortex, respectively, as indicated by the color of the vorticity lines. This represents the finalized separation using the method in [23], denoted by the single color of the region. (b) depicts the same vortex with our method, showcasing individual separated segments in different colors. (c) presents the same vortex from a different angle, while (d) displays our results from the same angle as (c). It is evident that our vortex extraction method removes extra blobs (blue, red, white) while retaining the vortex of interest (green).",
|
| 144 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_3c.png"
|
| 145 |
+
},
|
| 146 |
+
"7(d)": {
|
| 147 |
+
"figure_path": "2407.03384v3_figure_7(d).png",
|
| 148 |
+
"caption": "(d)\nFigure 7: (a) illustrates a hairpin vortex, evident from the strong positive and strong negative values of \u03c9y\u2032superscriptsubscript\ud835\udf14\ud835\udc66\u2032\\omega_{y}^{\\prime}italic_\u03c9 start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT in the head and leg of the hairpin vortex, respectively, as indicated by the color of the vorticity lines. This represents the finalized separation using the method in [23], denoted by the single color of the region. (b) depicts the same vortex with our method, showcasing individual separated segments in different colors. (c) presents the same vortex from a different angle, while (d) displays our results from the same angle as (c). It is evident that our vortex extraction method removes extra blobs (blue, red, white) while retaining the vortex of interest (green).",
|
| 149 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_3d.png"
|
| 150 |
+
},
|
| 151 |
+
"8(a)": {
|
| 152 |
+
"figure_path": "2407.03384v3_figure_8(a).png",
|
| 153 |
+
"caption": "(a)\nFigure 8: (a) shows the finalized separation of a cluster of vortices using our method. Individual vortices are represented by different colors. (b) shows the finalized separation using the method in [23] where inaccurate splits are highlighted in circles. It can be clearly seen that our method demonstrates better results in avoiding inaccurate splits.",
|
| 154 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_4_a.png"
|
| 155 |
+
},
|
| 156 |
+
"8(b)": {
|
| 157 |
+
"figure_path": "2407.03384v3_figure_8(b).png",
|
| 158 |
+
"caption": "(b)\nFigure 8: (a) shows the finalized separation of a cluster of vortices using our method. Individual vortices are represented by different colors. (b) shows the finalized separation using the method in [23] where inaccurate splits are highlighted in circles. It can be clearly seen that our method demonstrates better results in avoiding inaccurate splits.",
|
| 159 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_4_b.png"
|
| 160 |
+
},
|
| 161 |
+
"9(a)": {
|
| 162 |
+
"figure_path": "2407.03384v3_figure_9(a).png",
|
| 163 |
+
"caption": "(a)\nFigure 9: (a) shows the vortices (blue) in a single timestamp of the dataset from [6]. For effective visualization, we only show the results for the section highlighted in a rectangle (white) in (a). Fig. (b), (c), and (d) show the separation results for [23] for VSF values of 1, 3, and 5 respectively. (e) shows the results of our separation method. Individual vortices are visualized with different colors.",
|
| 164 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_2a.png"
|
| 165 |
+
},
|
| 166 |
+
"9(b)": {
|
| 167 |
+
"figure_path": "2407.03384v3_figure_9(b).png",
|
| 168 |
+
"caption": "(b)\nFigure 9: (a) shows the vortices (blue) in a single timestamp of the dataset from [6]. For effective visualization, we only show the results for the section highlighted in a rectangle (white) in (a). Fig. (b), (c), and (d) show the separation results for [23] for VSF values of 1, 3, and 5 respectively. (e) shows the results of our separation method. Individual vortices are visualized with different colors.",
|
| 169 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_2b.png"
|
| 170 |
+
},
|
| 171 |
+
"9(c)": {
|
| 172 |
+
"figure_path": "2407.03384v3_figure_9(c).png",
|
| 173 |
+
"caption": "(c)\nFigure 9: (a) shows the vortices (blue) in a single timestamp of the dataset from [6]. For effective visualization, we only show the results for the section highlighted in a rectangle (white) in (a). Fig. (b), (c), and (d) show the separation results for [23] for VSF values of 1, 3, and 5 respectively. (e) shows the results of our separation method. Individual vortices are visualized with different colors.",
|
| 174 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_2c.png"
|
| 175 |
+
},
|
| 176 |
+
"9(d)": {
|
| 177 |
+
"figure_path": "2407.03384v3_figure_9(d).png",
|
| 178 |
+
"caption": "(d)\nFigure 9: (a) shows the vortices (blue) in a single timestamp of the dataset from [6]. For effective visualization, we only show the results for the section highlighted in a rectangle (white) in (a). Fig. (b), (c), and (d) show the separation results for [23] for VSF values of 1, 3, and 5 respectively. (e) shows the results of our separation method. Individual vortices are visualized with different colors.",
|
| 179 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_2d.png"
|
| 180 |
+
},
|
| 181 |
+
"9(e)": {
|
| 182 |
+
"figure_path": "2407.03384v3_figure_9(e).png",
|
| 183 |
+
"caption": "(e)\nFigure 9: (a) shows the vortices (blue) in a single timestamp of the dataset from [6]. For effective visualization, we only show the results for the section highlighted in a rectangle (white) in (a). Fig. (b), (c), and (d) show the separation results for [23] for VSF values of 1, 3, and 5 respectively. (e) shows the results of our separation method. Individual vortices are visualized with different colors.",
|
| 184 |
+
"url": "http://arxiv.org/html/2407.03384v3/extracted/6077595/figures/fig5_2e.png"
|
| 185 |
+
}
|
| 186 |
+
},
|
| 187 |
+
"validation": true,
|
| 188 |
+
"references": [
|
| 189 |
+
{
|
| 190 |
+
"1": {
|
| 191 |
+
"title": "Detection, quantification, and tracking of vortices using streamline geometry.",
|
| 192 |
+
"author": "I. Ari Sadarjoen and F. H. Post.",
|
| 193 |
+
"venue": "Computers & Graphics, 24(3):333\u2013341, 2000. doi: 10\u2006.\u20061016/S0097-8493(00)00029-7",
|
| 194 |
+
"url": "https://doi.org/https://doi.org/10.1016/S0097-8493(00)00029-7"
|
| 195 |
+
}
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"2": {
|
| 199 |
+
"title": "Identifying turbulent structures through topological segmentation.",
|
| 200 |
+
"author": "P.-T. Bremer, A. Gruber, J. Bennett, A. Gyulassy, H. Kolla, J. Chen, and R. Grout.",
|
| 201 |
+
"venue": "Communications in Applied Mathematics and Computational Science, 11(1):37\u201353, 2016. doi: 10\u2006.\u20062140/camcos\u2006.\u20062016\u2006.\u200611\u2006.\u200637",
|
| 202 |
+
"url": "https://doi.org/10.2140/camcos.2016.11.37"
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"3": {
|
| 207 |
+
"title": "Computing contour trees in all dimensions.",
|
| 208 |
+
"author": "H. Carr, J. Snoeyink, and U. Axen.",
|
| 209 |
+
"venue": "Computational Geometry, 24(2):75\u201394, 2003.",
|
| 210 |
+
"url": "https://doi.org/https://doi.org/10.1016/S0925-7721(02)00093-7"
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"4": {
|
| 215 |
+
"title": "A general classification of three\u2010dimensional flow fields.",
|
| 216 |
+
"author": "M. S. Chong, A. E. Perry, and B. J. Cantwell.",
|
| 217 |
+
"venue": "Physics of Fluids A: Fluid Dynamics, 2(5):765\u2013777, 05 1990. doi: 10\u2006.\u20061063/1\u2006.\u2006857730",
|
| 218 |
+
"url": "https://doi.org/10.1063/1.857730"
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"5": {
|
| 223 |
+
"title": "Computational topology: an introduction.",
|
| 224 |
+
"author": "H. Edelsbrunner and J. L. Harer.",
|
| 225 |
+
"venue": "American Mathematical Society, 2010.",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"6": {
|
| 231 |
+
"title": "A web services accessible database of turbulent channel flow and its use for testing a new integral wall model for les.",
|
| 232 |
+
"author": "J. Graham, K. Kanov, X. Yang, M. Lee, N. Malaya, C. Lalescu, R. Burns, G. Eyink, A. Szalay, R. Moser, et al.",
|
| 233 |
+
"venue": "Journal of Turbulence, 17(2):181\u2013215, 2016. doi: 10\u2006.\u20067281/T10K26QW",
|
| 234 |
+
"url": "https://doi.org/10.7281/T10K26QW"
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"7": {
|
| 239 |
+
"title": "An objective definition of a vortex.",
|
| 240 |
+
"author": "G. HALLER.",
|
| 241 |
+
"venue": "Journal of Fluid Mechanics, 525:1\u201326, 2005. doi: 10\u2006.\u20061017/S0022112004002526",
|
| 242 |
+
"url": "https://doi.org/10.1017/S0022112004002526"
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"8": {
|
| 247 |
+
"title": "Lagrangian coherent structures.",
|
| 248 |
+
"author": "G. Haller.",
|
| 249 |
+
"venue": "Annual Review of Fluid Mechanics, 47(1):137\u2013162, 2015. doi: 10\u2006.\u20061146/annurev-fluid-010313-141322",
|
| 250 |
+
"url": "https://doi.org/10.1146/annurev-fluid-010313-141322"
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"9": {
|
| 255 |
+
"title": "Vorticity and vortex dynamics in complex turbulent flows.",
|
| 256 |
+
"author": "J. Hunt.",
|
| 257 |
+
"venue": "Transactions of the Canadian Society for Mechanical Engineering, 11(1):21\u201335, 1987. doi: 10\u2006.\u20061139/tcsme-1987-0004",
|
| 258 |
+
"url": "https://doi.org/10.1139/tcsme-1987-0004"
|
| 259 |
+
}
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"10": {
|
| 263 |
+
"title": "On the identification of a vortex.",
|
| 264 |
+
"author": "J. Jeong and F. Hussain.",
|
| 265 |
+
"venue": "Journal of Fluid Mechanics, 285:69\u201394, 1995. doi: 10\u2006.\u20061017/S0022112095000462",
|
| 266 |
+
"url": "https://doi.org/https://doi.org/10.1017/S0022112095000462"
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"11": {
|
| 271 |
+
"title": "Direct numerical simulation and statistical analysis of stress-driven turbulent Couette flow with a free-slip boundary.",
|
| 272 |
+
"author": "M. Li and D. Yang.",
|
| 273 |
+
"venue": "Physics of Fluids, 31(8), 08 2019.",
|
| 274 |
+
"url": "https://doi.org/10.1063/1.5099650"
|
| 275 |
+
}
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"12": {
|
| 279 |
+
"title": "Rortex\u2014A new vortex vector definition and vorticity tensor and vector decompositions.",
|
| 280 |
+
"author": "C. Liu, Y. Gao, S. Tian, and X. Dong.",
|
| 281 |
+
"venue": "Physics of Fluids, 30(3):035103, 03 2018. doi: 10\u2006.\u20061063/1\u2006.\u20065023001",
|
| 282 |
+
"url": "https://doi.org/10.1063/1.5023001"
|
| 283 |
+
}
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"13": {
|
| 287 |
+
"title": "A visualization framework for multi-scale coherent structures in taylor-couette turbulence.",
|
| 288 |
+
"author": "D. B. Nguyen, R. O. Monico, and G. Chen.",
|
| 289 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, 27(2):902\u2013912, 2021. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062020\u2006.\u20063028892",
|
| 290 |
+
"url": "https://doi.org/10.1109/TVCG.2020.3028892"
|
| 291 |
+
}
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"14": {
|
| 295 |
+
"title": "Horizontal dispersion of floatable particles in the vicinity of velocity singularities such as convergences.",
|
| 296 |
+
"author": "A. Okubo.",
|
| 297 |
+
"venue": "Deep Sea Research and Oceanographic Abstracts, 17(3):445\u2013454, 1970. doi: 10\u2006.\u20061016/0011-7471(70)90059-8",
|
| 298 |
+
"url": "https://doi.org/https://doi.org/10.1016/0011-7471(70)90059-8"
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"15": {
|
| 303 |
+
"title": "The \u201cparallel vectors\u201d operator: a vector field visualization primitive.",
|
| 304 |
+
"author": "R. Peikert and M. Roth.",
|
| 305 |
+
"venue": "In VIS \u201999: Proceedings of the conference on Visualization \u201999, pp. 263\u2013270. IEEE Computer Society Press, Los Alamitos, CA, USA, 1999. doi: 10\u2006.\u20061109/VISUAL\u2006.\u20061999\u2006.\u2006809896",
|
| 306 |
+
"url": "https://doi.org/10.1109/VISUAL.1999.809896"
|
| 307 |
+
}
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"16": {
|
| 311 |
+
"title": "Visualization tools for vorticity transport analysis in incompressible flow.",
|
| 312 |
+
"author": "F. Sadlo, R. Peikert, and M. Sick.",
|
| 313 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, 12(5):949\u2013956, 2006. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062006\u2006.\u2006199",
|
| 314 |
+
"url": "https://doi.org/10.1109/TVCG.2006.199"
|
| 315 |
+
}
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"17": {
|
| 319 |
+
"title": "Topology-preserving 2-based vortex core line detection for flow visualization.",
|
| 320 |
+
"author": "T. Schafhitzel, J. E. Vollrath, J. P. Gois, D. Weiskopf, A. Castelo, and T. Ertl.",
|
| 321 |
+
"venue": "Computer Graphics Forum, 27(3):1023\u20131030, 2008. doi: 10\u2006.\u20061111/j\u2006.\u20061467-8659\u2006.\u20062008\u2006.\u200601238\u2006.\u2006x",
|
| 322 |
+
"url": "https://doi.org/https://doi.org/10.1111/j.1467-8659.2008.01238.x"
|
| 323 |
+
}
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"18": {
|
| 327 |
+
"title": "Interactive comparison of scalar fields based on largest contours with applications to flow visualization.",
|
| 328 |
+
"author": "D. Schneider, A. Wiebel, H. Carr, M. Hlawitschka, and G. Scheuermann.",
|
| 329 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, 14(6):1475\u20131482, 2008. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062008\u2006.\u2006143",
|
| 330 |
+
"url": "https://doi.org/10.1109/TVCG.2008.143"
|
| 331 |
+
}
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"19": {
|
| 335 |
+
"title": "Streak lines as tangent curves of a derived vector field.",
|
| 336 |
+
"author": "T. Weinkauf and H. Theisel.",
|
| 337 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, 16(6):1225\u20131234, 2010. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062010\u2006.\u2006198",
|
| 338 |
+
"url": "https://doi.org/10.1109/TVCG.2010.198"
|
| 339 |
+
}
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"20": {
|
| 343 |
+
"title": "The dynamics of enstrophy transfer in two-dimensional hydrodynamics.",
|
| 344 |
+
"author": "J. Weiss.",
|
| 345 |
+
"venue": "Physica D: Nonlinear Phenomena, 48(2):273\u2013294, 1991. doi: 10\u2006.\u20061016/0167-2789(91)90088-Q",
|
| 346 |
+
"url": "https://doi.org/https://doi.org/10.1016/0167-2789(91)90088-Q"
|
| 347 |
+
}
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"21": {
|
| 351 |
+
"title": "Topological flow structures in a mathematical model for rotation-mediated cell aggregation.",
|
| 352 |
+
"author": "A. Wiebel, R. Chan, C. Wolf, A. Robitzki, A. Stevens, and G. Scheuermann.",
|
| 353 |
+
"venue": "In Topological Methods in Data Analysis and Visualization: Theory, Algorithms, and Applications, pp. 193\u2013204. Springer, Berlin, Heidelberg, 2011. doi: 10\u2006.\u20061007/978-3-642-15014-2_16",
|
| 354 |
+
"url": "https://doi.org/10.1007/978-3-642-15014-2_16"
|
| 355 |
+
}
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"22": {
|
| 359 |
+
"title": "Hairpin vortex identification using template fitting on vortex corelines.",
|
| 360 |
+
"author": "A. Zafar and G. Chen.",
|
| 361 |
+
"venue": "[Poster presented at IEEE Visualization 2022].",
|
| 362 |
+
"url": "https://ieeevis.b-cdn.net/vis_2022/posters/v-vis-posters-1059.pdf"
|
| 363 |
+
}
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"23": {
|
| 367 |
+
"title": "Extract and characterize hairpin vortices in turbulent flows.",
|
| 368 |
+
"author": "A. Zafar, D. Yang, and G. Chen.",
|
| 369 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, 30(1):716\u2013726, 2024. doi: 10\u2006.\u20061109/TVCG\u2006.\u20062023\u2006.\u20063326603",
|
| 370 |
+
"url": "https://doi.org/10.1109/TVCG.2023.3326603"
|
| 371 |
+
}
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"24": {
|
| 375 |
+
"title": "Mechanisms for generating coherent packets of hairpin vortices in channel flow.",
|
| 376 |
+
"author": "J. ZHOU, R. J. ADRIAN, S. BALACHANDAR, and T. M. KENDALL.",
|
| 377 |
+
"venue": "Journal of Fluid Mechanics, 387:353\u2013396, 1999. doi: 10\u2006.\u20061017/S002211209900467X",
|
| 378 |
+
"url": "https://doi.org/10.1017/S002211209900467X"
|
| 379 |
+
}
|
| 380 |
+
}
|
| 381 |
+
],
|
| 382 |
+
"url": "http://arxiv.org/html/2407.03384v3"
|
| 383 |
+
}
|
20241217/2407.04368v2.json
ADDED
|
@@ -0,0 +1,444 @@
| 1 |
+
{
|
| 2 |
+
"title": "Romanization Encoding for Multilingual ASR",
|
| 3 |
+
"abstract": "We introduce romanization encoding for script-heavy languages to optimize multilingual and code-switching Automatic Speech Recognition (ASR) systems. By adopting romanization encoding alongside a balanced concatenated tokenizer within a FastConformer-RNNT framework equipped with a Roman2Char module, we significantly reduce vocabulary and output dimensions, enabling larger training batches and reduced memory consumption. Our method decouples acoustic modeling and language modeling, enhancing the flexibility and adaptability of the system. In our study, applying this method to Mandarin-English ASR resulted in a remarkable 63.51% vocabulary reduction and notable performance gains of 13.72% and 15.03% on SEAME code-switching benchmarks. Ablation studies on Mandarin-Korean and Mandarin-Japanese highlight our method\u2019s strong capability to address the complexities of other script-heavy languages, paving the way for more versatile and effective multilingual ASR systems.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Multilingual Automatic Speech Recognition (ASR) systems are designed to recognize and transcribe speech in multiple languages.\nCode-switching (CS) is a special case of this, dealing with speech that switches between two or more languages within a single utterance or conversation.\nWhile emerging cutting-edge web-scale large speech models such as [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###] demonstrate magnificent performance on multilingual ASR, they still fall short in CS scenarios [4 ###reference_b4###], often due to a lack of natural CS data for training.\nThis scarcity hinders the ability of both general large speech models and specialized CS ASR systems to effectively learn and integrate acoustic and linguistic information [5 ###reference_b5###].\nPart of the challenge of multilingual and CS ASR arises from text representations of languages from different language families.\nLanguages like those in the Indo-European family usually use a Latin-based alphabet with relatively smaller character sets.\nThese can be efficiently represented using methods like byte-pair encoding (BPE), which breaks down words into smaller pieces or sub-words. Research has shown that using sub-words can lead to better performance in language processing tasks [6 ###reference_b6###, 7 ###reference_b7###].\nHowever, languages such as Mandarin, Korean, and Japanese have a much larger set of unique characters, making sub-word representation less practical. While there are methods to break these characters into smaller units (like love in Mandarin\u2019s\n{CJK*}UTF8gbsn\u7231\u60c5 \u00e0i q\u00edng with Pinyin and Korean\u2019s {CJK*}UTF8mj\uc0ac\ub791 \u3145\u314f \u3139\u314f\u3147 with Jamo) and group these characters into sub-units (i.e. \u00e0i q\u00e0ng \u00e0iq\u00e0ng with segmentation), using these phonetic and semantic representations may not always yield the best results [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nDespite the effectiveness of character-based approaches for individual languages, their integration into multilingual models is challenging. For instance, [11 ###reference_b11###] documents the use of 8k characters for Mandarin, 4k for Japanese, and 2k for Korean, alongside a standardized set of 512 sub-words per language for other languages. This approach yields 11k unique tokens for these three languages alone, leading to a significantly large and potentially imbalanced vocabulary that inflates the model\u2019s output dimension.\nEffectively encoding languages with unique scripts is crucial for multilingual and CS ASR models.\nMany non-Latin languages can be transcribed into the Latin alphabet through romanization.\nFor instance, pinyin, the primary romanization system for Standard Chinese, facilitates a mapping where a single character can be represented by different pinyins with tones representing the pronunciation.\nTypically, around 1,000 distinct pinyins with tones can represent about 5,000 Chinese characters. 
While romanization doesn\u2019t provide a strict one-to-one match, it effectively reduces the vocabulary size and allows the encoder to focus on learning acoustic modeling.\nWe propose separating acoustic and language modeling in multilingual and CS ASR models,\nusing romanization to reduce vocabulary size and speed up training and inference, aiming to improve model performance and adaptability.\nThis approach enhances system flexibility and allows for the use of advanced decoders like Large Language Models (LLMs) for efficient conversion. With this approach, we can utilize synthetic text data for easy fine-tuning to address the shortage of CS audio data.\nIn this paper, we make the following contributions:\nRomanization is investigated as an encoding method in multilingual and CS ASR tasks. We apply our encoding method with a balanced concatenated tokenizer to FastConformer-RNNT with a Roman2Char decoder without introducing additional modules such as Language Modeling (LM).\nExperiments on Mandarin-English CS data show that our model significantly reduces vocabulary size and the dimensions of the output layer, supports larger training batches, lowers memory consumption, and achieves promising outcomes. We release the checkpoints and implementations in NeMo1.\nOur ablation studies on Mandarin-Korean and Mandarin-Japanese multilingual data demonstrate the effectiveness and generalizability of the proposed method."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related work",
|
| 15 |
+
"text": "Beyond the method described in [11 ###reference_b11###], OpenAI\u2019s Whisper [1 ###reference_b1###] system uses Byte-level Byte Pair Encoding (BBPE) [12 ###reference_b12###] for text tokenization, proving effective across various applications.\nHowever, it faces challenges with languages that have unique scripts or significantly differ from the Indo-European family, like Hebrew, Chinese, and Korean, primarily due to BBPE\u2019s limitations in handling distinct scripts or linguistic structures.\nResearch noted in [13 ###reference_b13###] indicates that BBPE can lead to higher deletion rates in bilingual End-to-End (E2E) ASR systems due to invalid byte sequences, and these BBPE-based bilingual systems underperform compared to their monolingual counterparts.\nGoogle USM [2 ###reference_b2###]\u2019s approach with word-piece models (WPMs) also struggles with script diversity, resulting in large output layers and difficulties in scaling.\nConversely, for complex-script languages like Chinese, substituting characters with Pinyin for text encoding in Natural Language Processing (NLP) and ASR tasks typically offers greater efficiency and robustness compared to processing each character individually as demonstrated in\n[14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###].\nRomanization Encoding has been studied in both NLP and speech processing fields. The uroman tool, introduced by [18 ###reference_b18###], converts texts to Latin-scripts, aiming for phonetic representation.\nThis work has been applied in multilingual pretrained language models [19 ###reference_b19###] to enhance the low-resourced languages.\nUroman is also utilized for pretraining the speech processing system in [3 ###reference_b3###] as additional forced alignment to tokenize texts.\nUroman\u2019s unidirectional nature poses a challenge for ASR tasks that require original script output and an additional deromanization step.\nVarious languages have multiple romanization methods.\nWhile uroman is universal, our focus is on the most popular Romanization systems for each language studied: Pinyin for Chinese, Revised Romanization for Korean, and Hepburn Romanization for Japanese.\nThis approach aims to preserve maximum phonetic and linguistic information and avoid unnecessary transformations, such as uroman converting digital numbers in various scripts to Western Arabic numerals.\nIn addition, as phonological distinctions might be lost during the romanization process making deromanization more difficult [20 ###reference_b20###]. Thus it is more feasible to unify the romanization and deromanization procedure in an end-to-end fashion.\nResearchers have explored specialized model architectures [21 ###reference_b21###, 22 ###reference_b22###] for code-switching tasks to better capture language-specific information.\nThis includes integrating Language Identification (LID) [23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###] enhancement strategies and leveraging pre-trained models alongside LM based Beam Search [17 ###reference_b17###] during inference to boost performance.\nDespite these advancements, the evaluation of these models on monolingual test sets often goes unexamined, and the volume of training data available is typically constrained. 
This situation is largely attributed to the scarcity of CS data.\nTo enhance the data and domain scope for CS ASR training, methods such as transfer learning from monolingual ASR to initialize encoders with both monolingual and code-switched datasets have been implemented in [23 ###reference_b23###].\nAdditionally, efforts including synthetic text data generation have been explored to further augment the training resources [26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###].\nTo our knowledge, only one publicly accessible checkpoint2 for Mandarin-English CS exists and it was solely trained on CS dataset SEAME [29 ###reference_b29###].\n22footnotetext: https://huggingface.co/espnet/vectominist_seame_asr_conformer_bpe5626 ###reference_seame_asr_conformer_bpe5626###\n###figure_1###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Method",
|
| 21 |
+
"text": ""
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Model structure",
|
| 27 |
+
"text": "Our model structure is illustrated in Figure 1 ###reference_###, where we take a ZH-EN CS as an example.\nIn this work, we employ the Fast Conformer model [30 ###reference_b30###] combined with Recurrent Neural Network Transducer (RNNT) technology [31 ###reference_b31###] as our baseline.\nThe Fast Conformer is an optimized version of the original Conformer model.\nIt features a new downsampling schema that significantly reduces computational requirements by approximately 2.9 times and enhances inference speed, while maintaining or even improving performance across various Speech and NLP tasks.\nRNNT excels in capturing sequence knowledge and is popular in both monolingual [32 ###reference_b32###] and CS ASR [25 ###reference_b25###] tasks.\nHowever, despite RNNT\u2019s strengths, it tends to struggle with script-heavy languages that have large vocabularies, as it can severely limit batch sizes and slow down training\n[33 ###reference_b33###].\nThe baseline, highlighted by a dashed rectangle inputs ZH characters and EN words. In contrast, our method replaces characters with Romanized text (Pinyin) before processing through a concatenated tokenizer for text representations (detailed in Section 3.2 ###reference_###). Meanwhile the audio representation is learned by FastConformer encoders. We introduce another decoder that transcribes the romanized text to characters (described in Section 3.3 ###reference_###)."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Roman and BPE concatenated tokenizer",
|
| 33 |
+
"text": "Following the concatenated tokenizer approach introduced by [34 ###reference_b34###], we leverage pre-trained monolingual tokenizers to construct a combined tokenizer.\nThis setup ensures separate label spaces for each language, enabling good generalization capabilities.\nFor instance, English tokens are assigned indices ranging from [0,1023], while Mandarin tokens are allocated indices starting from 1024 up to 1024 + vocab size.\nThe aggregated approach, while achieving similar performance levels to the non-aggregated method, offers the added advantage of facilitating LID. It does so by providing pseudo language information during training, thereby enabling the model to learn LID representations internally.\nWe prepare 1,024 English BPE sub-word units using the LibriSpeech (LS) dataset [35 ###reference_b35###] and roughly 5,000 Chinese characters from the AISHELL-2 (AS2) dataset [36 ###reference_b36###].\nThese Chinese characters were then romanized into Latin characters referred to here as \u2018Roman\u2019 encoding for all languages using the PyPinyin3 toolkit.\nAs seen in Table 1 ###reference_###, this romanization process reduced the Chinese vocabulary size from 5,178 to 1,239, enabling us to create a balanced concatenated tokenizer for Mandarin and English.\nFor Korean and Japanese, similar processing methods were employed using the kroman4 and pykakasi5 toolkit, respectively, to achieve comparable reductions in vocabulary size and to facilitate a unified approach to tokenizer construction across multiple languages.\nUTF8mj\n###table_1###"
|
| 34 |
+
},
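As a rough illustration of the romanization and index-offset scheme described in the section text above, the Python sketch below uses the PyPinyin toolkit (Style.TONE3 appends tone digits, matching the "cha4 bu4 duo1" example in Table 1) and builds a concatenated vocabulary in which English BPE tokens keep the low index range and romanized Mandarin tokens are shifted by a fixed offset. This is not the released NeMo recipe; the function names and the english_bpe_vocab / zh_roman_vocab arguments are illustrative placeholders.

from pypinyin import lazy_pinyin, Style

def romanize_zh(text: str) -> list[str]:
    # "差不多" -> ["cha4", "bu4", "duo1"]; tone digits appended as in Table 1(a).
    return lazy_pinyin(text, style=Style.TONE3)

def build_concat_vocab(english_bpe_vocab: list[str], zh_roman_vocab: list[str]) -> dict[str, int]:
    # English BPE sub-words occupy [0, len(english_bpe_vocab) - 1] (1024 tokens in the
    # paper's setup); romanized Mandarin units are shifted by that offset, so each
    # language keeps its own label range inside a single output layer.
    vocab = {tok: idx for idx, tok in enumerate(english_bpe_vocab)}
    offset = len(english_bpe_vocab)
    vocab.update({tok: offset + idx for idx, tok in enumerate(zh_roman_vocab)})
    return vocab

if __name__ == "__main__":
    print(romanize_zh("差不多"))   # ['cha4', 'bu4', 'duo1']
    toy = build_concat_vocab(["ten", "minutes"], ["cha4", "bu4", "duo1"])
    print(toy)                      # Mandarin entries start after the English block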
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Roman to Character Decoder",
|
| 39 |
+
"text": "To translate the romanized units Roman back to original characters, we introduce an additional module in our system called the Roman to Character (R2C) decoder.\nThis Transformer-based model is designed to learn the multi-to-multi mappings between romanized text and original characters. For English, a language that already uses the Latin alphabet, the inputs and outputs remain unchanged as shown in (d) of in Table 1 ###reference_###.\nImportantly, despite the E2E training approach, the R2C decoder functions independently of the RNNT encoder outputs, focusing solely on learning the sequence-to-sequence mapping.\nOnly its loss is merged with the RNNT loss, allowing for the possibility of separate training with text data or integration with more advanced pre-trained translation models, such as LLMs.\nThis flexibility also enables the module\u2019s extension to other languages with complex scripts, like Korean and Japanese.\nDuring training, accurately labeled Roman sequences serve as inputs for the module.\nTo streamline the process, the decoding stage exclusively uses the greedy search hypothesis from the RNNT decoder as input, simplifying the overall pipeline."
|
| 40 |
+
},
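The Roman-to-Character mapping described above is, at its core, a small sequence-to-sequence problem. The following is a minimal PyTorch sketch of such a module, assuming teacher-forced training on gold romanized labels; the layer sizes, vocabulary sizes, and the class name Roman2Char are illustrative assumptions, not the configuration of the released checkpoint.

import torch
import torch.nn as nn

class Roman2Char(nn.Module):
    """Toy Roman-to-character decoder: learns roman-unit to character mappings."""

    def __init__(self, roman_vocab: int, char_vocab: int, d_model: int = 256):
        super().__init__()
        self.src_emb = nn.Embedding(roman_vocab, d_model)
        self.tgt_emb = nn.Embedding(char_vocab, d_model)
        self.seq2seq = nn.Transformer(d_model=d_model, nhead=4,
                                      num_encoder_layers=2, num_decoder_layers=2,
                                      batch_first=True)
        self.proj = nn.Linear(d_model, char_vocab)

    def forward(self, roman_ids: torch.Tensor, char_ids: torch.Tensor) -> torch.Tensor:
        # Teacher forcing: gold romanized label sequences are the source and shifted
        # character sequences the target; the resulting loss can simply be added to
        # the RNNT loss, since no RNNT encoder states are consumed here.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(char_ids.size(1))
        hidden = self.seq2seq(self.src_emb(roman_ids), self.tgt_emb(char_ids),
                              tgt_mask=tgt_mask)
        return self.proj(hidden)  # (batch, tgt_len, char_vocab) logits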
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Experiments",
|
| 45 |
+
"text": ""
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "Data",
|
| 51 |
+
"text": "Code-switching data\nSEAME is a publicly available dataset designed for Mandarin-English speech recognition, containing Mandarin, English, and natural intra-sentential code-switching data from interviews and conversations. The dataset specifics, including the duration of each data type, are outlined in Table 2 ###reference_###.\n\u201cZH\u201d refers to the monolingual Mandarin segments, and \u201cEN\u201d to the monolingual English parts. The dataset includes approximately 60 hours of natural CS data, a quantity considered limited for training robust models. Notably, most of SEAME speakers are from Singapore and Malaysia, presenting accents different from Mainland China. For the purposes of model selection, 10% of the training set samples are randomly chosen to form a validation set.\nMonolingual data\nAISHELL-2 is an extensive 1000-hour open-source Mandarin speech corpus, the speakers of which mainly are from Mainland China.\nLibriSpeech comprises 960 hours of English speech from native speakers. These two monolingual datasets are used in our experiments to enhance the performance of code-switching ASR systems.\nEvaluation data\nFor evaluation, we stick to the data division of SEAME established by [37 ###reference_b37###], which includes a test set for Mandarin speech named test_man and another tailored to Southeast Asian accented English, labeled test_sge. The specific durations of these test sets are also detailed in Table 2 ###reference_###.\nAdditionally, to assess performance on monolingual data, test sets from AISHELL-2 (as2_test) and LibriSpeech (ls_clean) are utilized in our analysis."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Experiment Setup",
|
| 57 |
+
"text": "To evaluate monolingual test sets, Character Error Rate (CER) is applied to Mandarin, while Word Error Rate (WER) is used for English. For CS test sets, we employ Mixed Error Rate (MER), which incorporates word-level measurements for English and character-level assessments for Mandarin.\nIn all of our experiments, we use the Adam [38 ###reference_b38###] optimizer combined with a Cosine Annealing learning rate scheduler, including a warm-up phase of 10,000 steps.\nThe learning rate is set to peak at 1.5e-3 and then decrease to a minimum of 1e-6. In addition, we incorporate SpecAug [39 ###reference_b39###] during training process to enhance model robustness and performance.\nModel averaging is employed, and during evaluation, greedy search is utilized without the assistance of any external LM or re-scoring techniques.\nThe detailed training recipe will be open-sourced in NeMo."
|
| 58 |
+
},
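To make the CER/WER/MER distinction above concrete, here is a small self-contained sketch; the exact segmentation rules are an assumption for illustration, not the official scoring script. Error rates are Levenshtein distances normalized by reference length, computed over characters for Mandarin, words for English, and a mixed tokenization (CJK characters plus Latin words) for code-switched text.

def edit_distance(ref: list, hyp: list) -> int:
    # Single-row Levenshtein distance between token sequences.
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, row[0] = row[0], i
        for j, h in enumerate(hyp, 1):
            prev, row[j] = row[j], min(row[j] + 1, row[j - 1] + 1, prev + (r != h))
    return row[-1]

def error_rate(ref_tokens: list, hyp_tokens: list) -> float:
    return edit_distance(ref_tokens, hyp_tokens) / max(len(ref_tokens), 1)

def mixed_tokens(text: str) -> list:
    # MER-style tokenization: CJK ideographs as single characters, Latin runs as words.
    tokens, word = [], ""
    for ch in text:
        if "\u4e00" <= ch <= "\u9fff":
            if word:
                tokens.append(word)
                word = ""
            tokens.append(ch)
        elif ch.isspace():
            if word:
                tokens.append(word)
                word = ""
        else:
            word += ch
    if word:
        tokens.append(word)
    return tokens

# WER: error_rate(ref.split(), hyp.split());  CER: error_rate(list(ref), list(hyp))
# MER: error_rate(mixed_tokens(ref), mixed_tokens(hyp))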
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.3",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Results",
|
| 63 |
+
"text": "Performance of training solely on SEAME dataset is detailed in Table 3 ###reference_###, showing that romanization encoding yields improved results for both the test_man and test_sge sets.\nIn Table 4 ###reference_###, we integrate monolingual datasets utilizing their rich acoustic and linguistic content to boost multilingual and CS ASR.\nThe proposed Roman-based method outperforms the Char-based model in both of the test sets of SEAME, with 10.77% and 9.43% MER reductions respectively.\nFurther analysis by dividing the CS test sets into Mandarin and English segments underscores the advantages of romanization encoding for both languages.\nMoreover, to thoroughly assess the model\u2019s proficiency in handling both CS and monolingual scenarios, results for monolingual test sets are presented, indicating an improvement in performance on the monolingual Mandarin test set, albeit with a slight decline on the monolingual English test set.\nTo balance monolingual and code-switching (CS) data within a fixed 2085-hour training data, we upscale CS data to 285 hours and reduce both AISHELL-2 (AS2) and LibriSpeech (LS) data to 900 hours each.\nThis adjustment result in significant MER reductions of 13.72% and 15.03% in CS test sets, demonstrating improvements over baseline models.\nPerformance variations in monolingual sets were observed, largely due to the differing accents and speaking styles of speakers such as speakers from Singapore and Malaysia versus those from Mainland China [29 ###reference_b29###].\nNevertheless, the model effectively retains its capability to process monolingual information, often delivering equal or superior performance."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.4",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Ablation Study",
|
| 69 |
+
"text": "The proposed romanization encoding approach is designed to be easily adaptable to various languages.\nThis section demonstrates the effectiveness of our method with Korean and Japanese, which face challenges such as limited publicly available corpora, insufficient for training large ASR models, and lack of CS data.\nThrough training a multilingual model on constraint datasets (less than 50 hours), we demonstrate our approach\u2019s capability and efficiency in data-limited situations and its potential extension to other languages.\nMandarin-Korean\n66footnotetext: https://github.com/goodatlas/zeroth ###reference_###\nA Mandarin-Korean bilingual ASR model was trained using the entire 50-hour Zeroth Korean6 dataset and 50 hours data randomly selected from the AISHELL-1 dataset [40 ###reference_b40###].\nEvaluations on monolingual Mandarin (test_as1) and Korean (test_zeroth) test sets utilized CERs for performance measurement.\nResults in Table 5 ###reference_### indicate that the Roman-based bilingual ASR model maintains performance on the Korean test set while achieving better results on the AISHELL-1 test set compared to a character-based model.\nMandarin-Japanese\nWe also experiment on Mandarin-Japanese Bilingual ASR,\ndrawing training data randomly from 50 hours of the AISHELL-1 dataset and 50 hours from the Japanese ReazonSpeech [41 ###reference_b41###] dataset.\nBy using romanization encoding, the concatenated vocabulary size is reduced from 8,507 to 2,298.\nAs we can see in Table 6 ###reference_###, when evaluated on AISHELL1 (test_as1) and ReazonSpeech (test_reazon) test sets, Roman-based model can perform better than Character encoding one in a large margin, which further indicates effeteness and scalability of our proposed Romanization encoding for other script-heavy languages.\nEvaluations of R2C module\nOur proposed method includes additional R2C module to transcribe Roman to characters. The total training parameters are at par with the baseline system since the size of RNNT outputs is decreased.\nAlthough the end-to-end inference speed for the proposed system can not be faster than the character encoding models but the training batch sizes can be set larger, which is essential for the Multilingual ASR model training. For instance, the reduction in the RNNT concatenated vocabulary size in the Mandarin-Korean Bilingual ASR model from 6,380 to 2,441, primarily due to Mandarin (as Korean mapping is nearly one-to-one), allowed for at least a 2X larger training batch size and more than 20% quicker RNNT inference compared to models using character encoding. We believe that this work could benefit not only the RNNT-based model but also the popular auto-regressive speech large foundational models."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Conclusion",
|
| 75 |
+
"text": "In this study, we introduce romanization encoding as a strategy to enhance multilingual ASR systems, particularly for languages with complex scripts.\nOur experiments with Mandarin-English CS ASR illustrate that employing a balanced tokenizer by romanized characters can lead to significant performance gains, with improvements of 13.71% and 15.03% on SEAME CS test sets.\nAdditionally, we have extended the application of Roman-based tokenizers to Mandarin-Korean and Mandarin-Japanese multilingual ASR systems, yielding promising results in terms of both faster training speeds and improved performance.\nLooking ahead, we plan to refine our approach by applying BPE or similar encoding methods to romanized text, aiming for a more compact and efficient vocabulary.\nEnhancing the R2C decoder with advanced models like LLMs could significantly boost overall accuracy.\nThe system\u2019s flexibility enables the use of synthetic text data to improve R2C decoder meanwhile leveraging the pre-trained audio encoder to mitigate the limited availability of code-switching audio data."
|
| 76 |
+
}
|
| 77 |
+
],
|
| 78 |
+
"appendix": [],
|
| 79 |
+
"tables": {
|
| 80 |
+
"1": {
|
| 81 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1\">Table 1</span>: </span>Examples of romanization for Mandarin, Korean, Japanese and Mandarin-English. For Latin-based languages such as English, we bypass romanization and directly employ Byte Pair Encoding (BPE), setting the vocabulary count at 1,024. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.3\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T1.3.1.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.2\">language</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.3\">Char</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S3.T1.3.1.4\">vocab</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.5\">Roman</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.3.1.6\">vocab</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.2.1\">(a)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.2.2\">Mandarin</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.2.3\">\u5dee \u4e0d \u591a</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S3.T1.3.2.4\">5178</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.2.5\">cha4 bu4 duo1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.3.2.6\">1239</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.1\">(b)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.2\">Korean</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.3\">\uc548 \ub155 \ud558 \uc138 \uc694</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r\" id=\"S3.T1.3.3.4\">1202</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.5\">an nyeong ha se yo</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.3.3.6\">1202</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.4.1\">(c)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.4.2\">Japanese</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.4.3\">\u304b \u306a \u6f22 \u5b57</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r\" id=\"S3.T1.3.4.4\">3329</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.4.5\">ka na kan ji</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.3.4.6\">1059</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_t\" id=\"S3.T1.3.5.1\">(d)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_t\" id=\"S3.T1.3.5.2\">Mandarin-English</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_t\" id=\"S3.T1.3.5.3\">\u5dee \u4e0d \u591a ten minutes</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.3.5.4\">6202</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_t\" id=\"S3.T1.3.5.5\">cha4 bu4 duo1 ten minutes</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_t\" id=\"S3.T1.3.5.6\">2263</td>\n</tr>\n</table>\n</figure>",
|
| 82 |
+
"capture": "Table 1: Examples of romanization for Mandarin, Korean, Japanese and Mandarin-English. For Latin-based languages such as English, we bypass romanization and directly employ Byte Pair Encoding (BPE), setting the vocabulary count at 1,024. "
|
| 83 |
+
},
|
| 84 |
+
"2": {
|
| 85 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.1\">Table 2</span>: </span>Duration composition of Mandarin (ZH), English (EN), and code-switching (CS) utterances in SEAME corpus. The duration of Mandarin dev sets (as2_test) and English (ls_clean) are also included. </figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.3\" style=\"width:433.6pt;height:116.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(49.9pt,-13.5pt) scale(1.29890985660521,1.29890985660521) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T2.3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.1.2\">train</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.1.3\">val</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.1.4\">test_man</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.1.5\">test_sge</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.1.6\">as2_test</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.1.7\">ls_clean</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.3.1.2.1\">duration(h)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.2.2\">85.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.2.3\">9.8</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.2.4\">7.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.2.5\">3.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.2.6\">4.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.1.2.7\">5.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.3.1.3.1\">ZH (%)</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.3.2\">16.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.3.3\">16.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.3.4\">13.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.3.5\">5.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.3.6\">100</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.3.7\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.3.1.4.1\">EN (%)</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.4.2\">15.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.4.3\">16.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.4.4\">6.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.4.5\">41.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.4.6\">0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.3.1.4.7\">100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.3.1.5.1\">CS (%)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T2.3.1.5.2\">67.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T2.3.1.5.3\">67.3</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T2.3.1.5.4\">80.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T2.3.1.5.5\">53.8</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" 
id=\"S4.T2.3.1.5.6\">0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T2.3.1.5.7\">0</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 86 |
+
"capture": "Table 2: Duration composition of Mandarin (ZH), English (EN), and code-switching (CS) utterances in SEAME corpus. The duration of Mandarin dev sets (as2_test) and English (ls_clean) are also included. "
|
| 87 |
+
},
|
| 88 |
+
"3": {
|
| 89 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1.1\">Table 3</span>: </span>MERs (%) on the SEAME dataset reveal that the proposed romanization approach surpasses the character-based baseline by reducing vocabulary size, leading to a more balanced tokenizer and improved performance.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.3\" style=\"width:346.9pt;height:86.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(65.5pt,-16.4pt) scale(1.60641674622462,1.60641674622462) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.3.1.1.1\">Encoding</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.3.1.1.2\">vocab</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.3.1.1.3\">test_man</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.3.1.1.4\">test_sge</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.3.1.2.1\">Char+BPE</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.3.1.2.2\">6202</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.3.1.2.3\">22.26</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.3.1.2.4\">32.30</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.3.1.3.1\">Roman+BPE</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T3.3.1.3.2\">2263</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T3.3.1.3.3\">21.99</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T3.3.1.3.4\">31.45</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 90 |
+
"capture": "Table 3: MERs (%) on the SEAME dataset reveal that the proposed romanization approach surpasses the character-based baseline by reducing vocabulary size, leading to a more balanced tokenizer and improved performance."
|
| 91 |
+
},
|
| 92 |
+
"4": {
|
| 93 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.1.1\">Table 4</span>: </span>Adding monolingual AISHELL2 (AS2) and LibriSpeech (LS) data during training. Results are evaluated with CS testsets including test_man and test_sge, and monolingual testsets including the testset of AS (as2_test) and test_clean of LS (ls_clean).\nTo balance the different acoustic and language information, we attempt upsampling CS data but keep the total number of training data fixed. </figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.3\" style=\"width:867.2pt;height:184pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(179.1pt,-38.0pt) scale(1.70399491446827,1.70399491446827) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T4.3.1.1.1\">Encoding</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S4.T4.3.1.1.2\">Dataset (hours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"6\" id=\"S4.T4.3.1.1.3\">SEAME</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.3.1.1.4\">AS2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.3.1.1.5\">LS</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.2.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.3.1.2.1.1\">ZH</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.2.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.3.1.2.2.1\">EN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.2.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.3.1.2.3.1\">SEAME</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.2.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.3.1.2.4.1\">AS2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.2.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.3.1.2.5.1\">LS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S4.T4.3.1.2.6\">test_man</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S4.T4.3.1.2.7\">test_sge</td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.2.8\">as2_test</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.2.9\">ls_clean</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.3.1\">MER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.3.2\">CER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.3.3\">WER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.3.4\">MER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.3.5\">CER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.3.6\">WER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.3.7\">CER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.3.8\">WER</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1.4\">\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.1\">Char</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.4.2\">BPE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.3\">85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.4\">1000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.4.5\">1000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.6\">17.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.7\">16.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.8\">28.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.9\">25.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.10\">26.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.4.11\">29.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.4.12\">7.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.1.4.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.1.4.13.1\">2.60</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.1\">Roman</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.3.1.5.2\">BPE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.3\">85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.4\">1000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.3.1.5.5\">1000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.6\">15.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.7\">15.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.8\">25.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.9\">22.96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.10\">21.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.3.1.5.11\">27.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.3.1.5.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.1.5.12.1\">7.05</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.1.5.13\">3.26</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.1\">Roman</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.3.1.6.2\">BPE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.3\">285</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.4\">900</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.3.1.6.5\">900</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.1.6.6.1\">15.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.1.6.7.1\">14.79</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.1.6.8.1\">24.31</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.1.6.9.1\">21.54</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.10\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T4.3.1.6.10.1\">20.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.3.1.6.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.1.6.11.1\">25.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.3.1.6.12\">7.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.3.1.6.13\">2.75</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 94 |
+
"capture": "Table 4: Adding monolingual AISHELL2 (AS2) and LibriSpeech (LS) data during training. Results are evaluated with CS testsets including test_man and test_sge, and monolingual testsets including the testset of AS (as2_test) and test_clean of LS (ls_clean).\nTo balance the different acoustic and language information, we attempt upsampling CS data but keep the total number of training data fixed. "
|
| 95 |
+
},
|
| 96 |
+
"5": {
|
| 97 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.2.1.1\">Table 5</span>: </span>CER (%) for a Mandarin-Korean Bilingual system trained on 50h+50h of data shows the proposed method reduces Mandarin vocabulary size, boosts its performance, and maintains Korean results.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T5.3\" style=\"width:346.9pt;height:89.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(68.7pt,-17.7pt) scale(1.65627134153227,1.65627134153227) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T5.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T5.3.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T5.3.1.1.1\">Encoding</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T5.3.1.1.2\">vocab</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T5.3.1.1.3\">test_as1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T5.3.1.1.4\">test_zeroth</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.3.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T5.3.1.2.1\">Char</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T5.3.1.2.2\">6380</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T5.3.1.2.3\">12.87</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T5.3.1.2.4\">1.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.3.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T5.3.1.3.1\">Roman</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T5.3.1.3.2\">2441</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T5.3.1.3.3\">12.60</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T5.3.1.3.4\">1.40</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 98 |
+
"capture": "Table 5: CER (%) for a Mandarin-Korean Bilingual system trained on 50h+50h of data shows the proposed method reduces Mandarin vocabulary size, boosts its performance, and maintains Korean results."
|
| 99 |
+
},
|
| 100 |
+
"6": {
|
| 101 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.2.1.1\">Table 6</span>: </span>CERs (%) for Mandarin-Japanese Bilingual ASR models indicates Roman encoding reduces vocabulary size by 73% and significantly enhances performance for both Mandarin and Japanese.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T6.3\" style=\"width:346.9pt;height:89pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(68.2pt,-17.5pt) scale(1.64753110880305,1.64753110880305) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T6.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T6.3.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T6.3.1.1.1\">Encoding</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T6.3.1.1.2\">vocab</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T6.3.1.1.3\">test_as1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T6.3.1.1.4\">test_reazon</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.3.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T6.3.1.2.1\">Char</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T6.3.1.2.2\">8507</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T6.3.1.2.3\">19.75</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T6.3.1.2.4\">36.00</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.3.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T6.3.1.3.1\">Roman</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T6.3.1.3.2\">2298</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T6.3.1.3.3\">11.30</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S4.T6.3.1.3.4\">29.31</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 102 |
+
"capture": "Table 6: CERs (%) for Mandarin-Japanese Bilingual ASR models indicates Roman encoding reduces vocabulary size by 73% and significantly enhances performance for both Mandarin and Japanese."
|
| 103 |
+
}
|
| 104 |
+
},
|
| 105 |
+
"image_paths": {
|
| 106 |
+
"1": {
|
| 107 |
+
"figure_path": "2407.04368v2_figure_1.png",
|
| 108 |
+
"caption": "Fig. 1: The proposed approach builds upon the baseline Fast-Conformer RNNT model, which incorporates a Concatenated Tokenizer and is outlined within a dashed rectangle. Instead of using direct Char input/output for Mandarin and BPE for English, our approach applies romanization encoding, feeding Pinyin (for Mandarin) and BPE (for English) into the Fast-Conformer RNNT. The Roman to Char Decoder then maps these inputs back to Char and BPE, respectively. The model is trained end-to-end (E2E), combining text-to-text loss with RNNT loss for optimization.",
|
| 109 |
+
"url": "http://arxiv.org/html/2407.04368v2/extracted/6075110/figures/fig5.jpg"
|
| 110 |
+
}
|
| 111 |
+
},
|
| 112 |
+
"validation": true,
|
| 113 |
+
"references": [
|
| 114 |
+
{
|
| 115 |
+
"1": {
|
| 116 |
+
"title": "\u201cRobust speech recognition via large-scale weak supervision,\u201d",
|
| 117 |
+
"author": "Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever,",
|
| 118 |
+
"venue": "in ICML, 2023, pp. 28492\u201328518.",
|
| 119 |
+
"url": null
|
| 120 |
+
}
|
| 121 |
+
},
|
| 122 |
+
{
|
| 123 |
+
"2": {
|
| 124 |
+
"title": "\u201cGoogle usm: Scaling automatic speech recognition beyond 100 languages,\u201d",
|
| 125 |
+
"author": "Yu Zhang, Wei Han, James Qin, Yongqiang Wang, Ankur Bapna, Zhehuai Chen, Nanxin Chen, Bo Li, Vera Axelrod, Gary Wang, et al.,",
|
| 126 |
+
"venue": "arXiv preprint arXiv:2303.01037, 2023.",
|
| 127 |
+
"url": null
|
| 128 |
+
}
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"3": {
|
| 132 |
+
"title": "\u201cScaling speech technology to 1,000+ languages,\u201d",
|
| 133 |
+
"author": "Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, et al.,",
|
| 134 |
+
"venue": "arXiv preprint arXiv:2305.13516, 2023.",
|
| 135 |
+
"url": null
|
| 136 |
+
}
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"4": {
|
| 140 |
+
"title": "\u201cPrompting the Hidden Talent of Web-Scale Speech Models for Zero-Shot Task Generalization,\u201d",
|
| 141 |
+
"author": "Puyuan Peng, Brian Yan, Shinji Watanabe, and David Harwath,",
|
| 142 |
+
"venue": "in Proc. Interspeech, 2023, pp. 396\u2013400.",
|
| 143 |
+
"url": null
|
| 144 |
+
}
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"5": {
|
| 148 |
+
"title": "\u201cEnd-to-end code-switching asr for low-resourced language pairs,\u201d",
|
| 149 |
+
"author": "Xianghu Yue, Grandee Lee, Emre Y\u0131lmaz, Fang Deng, and Haizhou Li,",
|
| 150 |
+
"venue": "in ASRU, 2019, pp. 972\u2013979.",
|
| 151 |
+
"url": null
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"6": {
|
| 156 |
+
"title": "\u201cOn the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition,\u201d",
|
| 157 |
+
"author": "Kazuki Irie, Rohit Prabhavalkar, Anjuli Kannan, Antoine Bruguier, David Rybach, and Patrick Nguyen,",
|
| 158 |
+
"venue": "in Proc. Interspeech, 2019, pp. 3800\u20133804.",
|
| 159 |
+
"url": null
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
{
|
| 163 |
+
"7": {
|
| 164 |
+
"title": "\u201cSubword regularization: Improving neural network translation models with multiple subword candidates,\u201d",
|
| 165 |
+
"author": "Taku Kudo,",
|
| 166 |
+
"venue": "in ACL, 2018, pp. 66\u201375.",
|
| 167 |
+
"url": null
|
| 168 |
+
}
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"8": {
|
| 172 |
+
"title": "\u201cExploring Lexicon-Free Modeling Units for End-to-End Korean and Korean-English Code-Switching Speech Recognition,\u201d",
|
| 173 |
+
"author": "Jisung Wang, Jihwan Kim, Sangki Kim, and Yeha Lee,",
|
| 174 |
+
"venue": "in Proc. Interspeech, 2020, pp. 1072\u20131075.",
|
| 175 |
+
"url": null
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"9": {
|
| 180 |
+
"title": "\u201cA comparison of modeling units in sequence-to-sequence speech recognition with the transformer on mandarin chinese,\u201d",
|
| 181 |
+
"author": "Shiyu Zhou, Linhao Dong, Shuang Xu, and Bo Xu,",
|
| 182 |
+
"venue": "in International Conference on Neural Information Processing, 2018, pp. 210\u2013220.",
|
| 183 |
+
"url": null
|
| 184 |
+
}
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"10": {
|
| 188 |
+
"title": "\u201cIs word segmentation necessary for deep learning of chinese representations?,\u201d",
|
| 189 |
+
"author": "Xiaoya Li, Yuxian Meng, Xiaofei Sun, Qinghong Han, Arianna Yuan, and Jiwei Li,",
|
| 190 |
+
"venue": "in ACL, 2019, pp. 3242\u20133252.",
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"11": {
|
| 196 |
+
"title": "\u201cMassively multilingual asr on 70 languages: Tokenization, architecture, and generalization capabilities,\u201d",
|
| 197 |
+
"author": "Andros Tjandra, Nayan Singhal, David Zhang, Ozlem Kalinli, Abdelrahman Mohamed, Duc Le, and Michael L Seltzer,",
|
| 198 |
+
"venue": "in ICASSP, 2023, pp. 1\u20135.",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"12": {
|
| 204 |
+
"title": "\u201cLanguage models are unsupervised multitask learners,\u201d",
|
| 205 |
+
"author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.,",
|
| 206 |
+
"venue": ".",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"13": {
|
| 212 |
+
"title": "\u201cBilingual end-to-end asr with byte-level subwords,\u201d",
|
| 213 |
+
"author": "Liuhui Deng, Roger Hsiao, and Arnab Ghoshal,",
|
| 214 |
+
"venue": "in ICASSP, 2022, pp. 6417\u20136421.",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"14": {
|
| 220 |
+
"title": "\u201cSub-Character Tokenization for Chinese Pretrained Language Models,\u201d",
|
| 221 |
+
"author": "Chenglei Si, Zhengyan Zhang, Yingfa Chen, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun,",
|
| 222 |
+
"venue": "Transactions of the Association for Computational Linguistics, pp. 469\u2013487, 2023.",
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"15": {
|
| 228 |
+
"title": "\u201cOn the effectiveness of pinyin-character dual-decoding for end-to-end mandarin chinese asr,\u201d",
|
| 229 |
+
"author": "Zhao Yang, Dianwen Ng, Xiao Fu, Liping Han, Wei Xi, Rui Wang, Rui Jiang, and Jizhong Zhao,",
|
| 230 |
+
"venue": "arXiv preprint arXiv:2201.10792, 2022.",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"16": {
|
| 236 |
+
"title": "\u201cDecoupling recognition and transcription in mandarin asr,\u201d",
|
| 237 |
+
"author": "Jiahong Yuan, Xingyu Cai, Dongji Gao, Renjie Zheng, Liang Huang, and Kenneth Church,",
|
| 238 |
+
"venue": "in ASRU, 2021, pp. 1019\u20131025.",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"17": {
|
| 244 |
+
"title": "\u201cNon-autoregressive mandarin-english code-switching speech recognition,\u201d",
|
| 245 |
+
"author": "Shun-Po Chuang, Heng-Jui Chang, Sung-Feng Huang, and Hung-yi Lee,",
|
| 246 |
+
"venue": "in ASRU, 2021, pp. 465\u2013472.",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"18": {
|
| 252 |
+
"title": "\u201cOut-of-the-box universal Romanization tool uroman,\u201d",
|
| 253 |
+
"author": "Ulf Hermjakob, Jonathan May, and Kevin Knight,",
|
| 254 |
+
"venue": "in Proceedings of ACL 2018, System Demonstrations, Fei Liu and Thamar Solorio, Eds., Melbourne, Australia, July 2018, pp. 13\u201318, Association for Computational Linguistics.",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"19": {
|
| 260 |
+
"title": "\u201cRomanization-based large-scale adaptation of multilingual language models,\u201d",
|
| 261 |
+
"author": "Sukannya Purkayastha, Sebastian Ruder, Jonas Pfeiffer, Iryna Gurevych, and Ivan Vuli\u0107,",
|
| 262 |
+
"venue": "in The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"20": {
|
| 268 |
+
"title": "\u201cJoint approach to deromanization of code-mixed texts,\u201d",
|
| 269 |
+
"author": "Rashed Rubby Riyadh and Grzegorz Kondrak,",
|
| 270 |
+
"venue": "in Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, Ann Arbor, Michigan, June 2019, pp. 26\u201334, Association for Computational Linguistics.",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"21": {
|
| 276 |
+
"title": "\u201cMulti-Encoder-Decoder Transformer for Code-Switching Speech Recognition,\u201d",
|
| 277 |
+
"author": "Xinyuan Zhou, Emre Y\u0131lmaz, Yanhua Long, Yijie Li, and Haizhou Li,",
|
| 278 |
+
"venue": "in Proc. Interspeech, 2020, pp. 1042\u20131046.",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"22": {
|
| 284 |
+
"title": "\u201cBi-encoder transformer network for mandarin-english code-switching speech recognition using mixture of experts.,\u201d",
|
| 285 |
+
"author": "Yizhou Lu, Mingkun Huang, Hao Li, Jiaqi Guo, and Yanmin Qian,",
|
| 286 |
+
"venue": "in Proc. Interspeech, 2020, pp. 4766\u20134770.",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"23": {
|
| 292 |
+
"title": "\u201cInvestigating end-to-end speech recognition for mandarin-english code-switching,\u201d",
|
| 293 |
+
"author": "Changhao Shan, Chao Weng, Guangsen Wang, Dan Su, Min Luo, Dong Yu, and Lei Xie,",
|
| 294 |
+
"venue": "in ICASSP, 2019, pp. 6056\u20136060.",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"24": {
|
| 300 |
+
"title": "\u201cReducing language confusion for code-switching speech recognition with token-level language diarization,\u201d",
|
| 301 |
+
"author": "Hexin Liu, Haihua Xu, Leibny Paola Garcia, Andy WH Khong, Yi He, and Sanjeev Khudanpur,",
|
| 302 |
+
"venue": "in ICASSP, 2023, pp. 1\u20135.",
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"25": {
|
| 308 |
+
"title": "\u201cTransformer-transducers for code-switched speech recognition,\u201d",
|
| 309 |
+
"author": "Siddharth Dalmia, Yuzong Liu, Srikanth Ronanki, and Katrin Kirchhoff,",
|
| 310 |
+
"venue": "in ICASSP, 2021, pp. 5859\u20135863.",
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"26": {
|
| 316 |
+
"title": "\u201cTraining code-switching language model with monolingual data,\u201d",
|
| 317 |
+
"author": "Shun-Po Chuang, Tzu-Wei Sung, and Hung-yi Lee,",
|
| 318 |
+
"venue": "in ICASSP, 2020, pp. 7949\u20137953.",
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"27": {
|
| 324 |
+
"title": "\u201cCode-switched language models using neural based synthetic data from parallel sentences,\u201d",
|
| 325 |
+
"author": "Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung,",
|
| 326 |
+
"venue": "in Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), 2019, pp. 271\u2013280.",
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"28": {
|
| 332 |
+
"title": "\u201cImproving Code-Switching Language Modeling with Artificially Generated Texts Using Cycle-Consistent Adversarial Networks,\u201d",
|
| 333 |
+
"author": "Chia-Yu Li and Ngoc Thang Vu,",
|
| 334 |
+
"venue": "in Proc. Interspeech, 2020, pp. 1057\u20131061.",
|
| 335 |
+
"url": null
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"29": {
|
| 340 |
+
"title": "\u201cSEAME: a Mandarin-English code-switching speech corpus in south-east asia,\u201d",
|
| 341 |
+
"author": "Dau-Cheng Lyu, Tien-Ping Tan, Eng Siong Chng, and Haizhou Li,",
|
| 342 |
+
"venue": "in Proc. Interspeech, 2010, pp. 1986\u20131989.",
|
| 343 |
+
"url": null
|
| 344 |
+
}
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"30": {
|
| 348 |
+
"title": "\u201cFast conformer with linearly scalable attention for efficient speech recognition,\u201d",
|
| 349 |
+
"author": "Dima Rekesh, Nithin Rao Koluguri, Samuel Kriman, Somshubra Majumdar, Vahid Noroozi, He Huang, Oleksii Hrinchuk, Krishna Puvvada, Ankur Kumar, Jagadeesh Balam, and Boris Ginsburg,",
|
| 350 |
+
"venue": "in ASRU, 2023, pp. 1\u20138.",
|
| 351 |
+
"url": null
|
| 352 |
+
}
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"31": {
|
| 356 |
+
"title": "\u201cSequence transduction with recurrent neural networks,\u201d",
|
| 357 |
+
"author": "Alex Graves,",
|
| 358 |
+
"venue": "arXiv preprint arXiv:1211.3711, 2012.",
|
| 359 |
+
"url": null
|
| 360 |
+
}
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"32": {
|
| 364 |
+
"title": "\u201cEfficient sequence transduction by jointly predicting tokens and durations,\u201d",
|
| 365 |
+
"author": "Hainan Xu, Fei Jia, Somshubra Majumdar, He Huang, Shinji Watanabe, and Boris Ginsburg,",
|
| 366 |
+
"venue": "in ICML, 2023, pp. 38462\u201338484.",
|
| 367 |
+
"url": null
|
| 368 |
+
}
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"33": {
|
| 372 |
+
"title": "\u201cPruned RNN-T for fast, memory-e\ufb00icient ASR training,\u201d",
|
| 373 |
+
"author": "Fangjun Kuang, Liyong Guo, Wei Kang, Long Lin, Mingshuang Luo, Zengwei Yao, and Daniel Povey,",
|
| 374 |
+
"venue": "in Proc. Interspeech, 2022, pp. 2068\u20132072.",
|
| 375 |
+
"url": null
|
| 376 |
+
}
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"34": {
|
| 380 |
+
"title": "\u201cUnified model for code-switching speech recognition and language identification based on concatenated tokenizer,\u201d",
|
| 381 |
+
"author": "Kunal Dhawan, KDimating Rekesh, and Boris Ginsburg,",
|
| 382 |
+
"venue": "in Proceedings of the 6th Workshop on Computational Approaches to Linguistic Code-witching, 2023, pp. 74\u201382.",
|
| 383 |
+
"url": null
|
| 384 |
+
}
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"35": {
|
| 388 |
+
"title": "\u201cLibrispeech: An asr corpus based on public domain audio books,\u201d",
|
| 389 |
+
"author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur,",
|
| 390 |
+
"venue": "in ICASSP, 2015, pp. 5206\u20135210.",
|
| 391 |
+
"url": null
|
| 392 |
+
}
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"36": {
|
| 396 |
+
"title": "\u201cAishell-2: Transforming mandarin asr research into industrial scale,\u201d",
|
| 397 |
+
"author": "Jiayu Du, Xingyu Na, Xuechen Liu, and Hui Bu,",
|
| 398 |
+
"venue": "arXiv preprint arXiv:1808.10583, 2018.",
|
| 399 |
+
"url": null
|
| 400 |
+
}
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"37": {
|
| 404 |
+
"title": "\u201cOn the End-to-End Solution to Mandarin-English Code-Switching Speech Recognition,\u201d",
|
| 405 |
+
"author": "Zhiping Zeng, Yerbolat Khassanov, Van Tung Pham, Haihua Xu, Eng Siong Chng, and Haizhou Li,",
|
| 406 |
+
"venue": "in Proc. Interspeech, 2019, pp. 2165\u20132169.",
|
| 407 |
+
"url": null
|
| 408 |
+
}
|
| 409 |
+
},
|
| 410 |
+
{
|
| 411 |
+
"38": {
|
| 412 |
+
"title": "\u201cAdam: A method for stochastic optimization,\u201d",
|
| 413 |
+
"author": "Diederik P. Kingma and Jimmy Ba,",
|
| 414 |
+
"venue": "in ICLR, 2015.",
|
| 415 |
+
"url": null
|
| 416 |
+
}
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"39": {
|
| 420 |
+
"title": "\u201cSpecaugment: A simple data augmentation method for automatic speech recognition,\u201d",
|
| 421 |
+
"author": "Daniel S Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D Cubuk, and Quoc V Le,",
|
| 422 |
+
"venue": "in Proc. Interspeech, 2019, pp. 2613\u20132617.",
|
| 423 |
+
"url": null
|
| 424 |
+
}
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"40": {
|
| 428 |
+
"title": "\u201cAishell-1: An open-source mandarin speech corpus and a speech recognition baseline,\u201d",
|
| 429 |
+
"author": "Hui Bu, Jiayu Du, Xingyu Na, Bengu Wu, and Hao Zheng,",
|
| 430 |
+
"venue": "in COCOSDA, 2017, pp. 1\u20135.",
|
| 431 |
+
"url": null
|
| 432 |
+
}
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"41": {
|
| 436 |
+
"title": "\u201cReazonspeech: A free and massive corpus for japanese asr,\u201d",
|
| 437 |
+
"author": "Yue Yin1 Daijiro Mori1 Seiji Fujimoto,",
|
| 438 |
+
"venue": ".",
|
| 439 |
+
"url": null
|
| 440 |
+
}
|
| 441 |
+
}
|
| 442 |
+
],
|
| 443 |
+
"url": "http://arxiv.org/html/2407.04368v2"
|
| 444 |
+
}
|
20241217/2407.16424v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2407.17418v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20241217/2408.01639v2.json
ADDED
|
@@ -0,0 +1,448 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Coordinating Planning and Tracking in Layered Control Policies via Actor-Critic Learning",
|
| 3 |
+
"abstract": "We propose a reinforcement learning (RL)-based algorithm to jointly train (1) a trajectory planner and (2) a tracking controller in a layered control architecture. Our algorithm arises naturally from a rewrite of the underlying optimal control problem that lends itself to an actor-critic learning approach. By explicitly learning a dual network to coordinate the interaction between the planning and tracking layers, we demonstrate the ability to achieve an effective consensus between the two components, leading to an interpretable policy. We theoretically prove that our algorithm converges to the optimal dual network in the Linear Quadratic Regulator (LQR) setting and empirically validate its applicability to nonlinear systems through simulation experiments on a unicycle model.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Layered control architectures (Matni et al., 2024 ###reference_b1###; Chiang et al., 2007 ###reference_b2###) are ubiquitous in complex cyber-physical systems, such as power networks, communication networks, and autonomous robots. For example, a typical autonomous robot has an autonomy stack consisting of decision-making, trajectory optimization, and low-level control. However, despite the widespread presence of such layered control architectures, there has been a lack of a principled framework for their design, especially in the data-driven regime.\nIn this work, we propose an algorithm for jointly learning a trajectory planner and a tracking controller. We start from an optimal control problem and show that a suitable relaxation of the problem naturally decomposes into reference generation and trajectory tracking layers. We then propose an algorithm to train a layered policy parameterized in a way that parallels this decomposition using actor-critic methods. Different from previous methods, we show how a dual network can be trained to coordinate the trajectory optimizer and the tracking controller. Our theoretical analysis and numerical experiments demonstrate that the proposed algorithm can achieve good performance in various settings while enjoying inherent interpretability and modularity."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "1.1",
|
| 13 |
+
"parent_section_id": "1",
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "1.1.1",
|
| 19 |
+
"parent_section_id": "1.1",
|
| 20 |
+
"section_name": "1.1.1 Layered control architectures",
|
| 21 |
+
"text": "The idea of layering has been studied extensively in the multi-rate control literature (Rosolia et al., 2022 ###reference_b3###; Csomay-Shanklin et al., 2022 ###reference_b4###), through the lens of optimization decomposition (Chiang et al., 2007 ###reference_b2###; Matni and Doyle, 2016 ###reference_b5###), and for specific application domains (Samad et al., 2007 ###reference_b6###; Samad and Annaswamy, 2017 ###reference_b7###; Jiang, 2018 ###reference_b8###). Recently, Matni et al. (Matni et al., 2024 ###reference_b1###) proposed a quantitative framework for the design and analysis of layered control architectures, which has since been instantiated to various control and robotics applications (Srikanthan et al., 2023a ###reference_b9###, b ###reference_b10###; Zhang et al., 2024 ###reference_b11###). Within this framework, our work is most related to Srikanthan et al. (2023b ###reference_b10###); Zhang et al. (2024 ###reference_b11###), which seek to design trajectory planners based on past data of a tracking controller. However, we consider the case where the low-level tracking controller is not given and has to be learned with the trajectory planner. We also provide a more principled approach to coordinating planning and tracking that leverages a dual network."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "1.1.2",
|
| 25 |
+
"parent_section_id": "1.1",
|
| 26 |
+
"section_name": "1.1.2 Hierarchical reinforcement learning",
|
| 27 |
+
"text": "Recently, reinforcement learning-based methods have demonstrated impressive performance on highly complex dynamical systems (Kumar et al., 2021 ###reference_b12###; Kaufmann et al., 2023 ###reference_b13###). Within the RL literature, our approach is most closely related to the idea of goal-conditioned reinforcement learning (Dayan and Hinton, 1992 ###reference_b14###; Kulkarni et al., 2016 ###reference_b15###; Levy et al., 2017 ###reference_b16###; Nachum et al., 2018a ###reference_b17###; Vezhnevets et al., 2017 ###reference_b18###; Nachum et al., 2018b ###reference_b19###). In this framework, an upper-level agent periodically specifies a goal for the lower-level agent to execute. However, the \u201cintrinsic\u201d reward used to train the lower-level agent is usually heuristically chosen. Nachum et al. (Nachum et al., 2018b ###reference_b19###) derived a principled objective for the lower-level agent based on a suboptimality bound introduced by the hierarchical structure, but they focus on the case where the goal is specified as a learned low-dimensional representation. We focus on the case where the dynamics are deterministic and derive a simple quadratic objective for the lower-level agent (tracking layer). We also structure our upper-level agent (planning layer) to generate full trajectories instead of single waypoints."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "1.1.3",
|
| 31 |
+
"parent_section_id": "1.1",
|
| 32 |
+
"section_name": "1.1.3 Actor-critic methods",
|
| 33 |
+
"text": "The actor-critic method (Silver et al., 2014 ###reference_b20###; Lillicrap et al., 2015 ###reference_b21###; Fujimoto et al., 2018 ###reference_b22###) describes a class of reinforcement learning algorithms that simultaneously learn a policy and its associated value function. These algorithms have achieved great success with continuous control tasks and have found various applications in the controls and robotics community (Wang and Fazlyab, 2024 ###reference_b23###; Grandesso et al., 2023 ###reference_b24###). In this paper, we use actor-critic methods to learn a tracking controller and its value function, where the latter is used to help the trajectory planner determine how difficult a generated trajectory is for the tracking controller to follow."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "1.2",
|
| 37 |
+
"parent_section_id": "1",
|
| 38 |
+
"section_name": "Statement of Contributions",
|
| 39 |
+
"text": "Our contribution is three-fold. First, we propose a novel way of parameterizing layered policies based on a principled derivation. In this parameterization, we introduce a dual network to coordinate the trajectory planner and the tracking controller. We show how this dual network can be trained jointly with other components in the layered policy in an RL fashion. Secondly, we show theoretically and empirically that our algorithm for updating the dual network can recover the optimal dual network parameters for unconstrained linear quadratic regulator (LQR) problems. Finally, we evaluate our method empirically on constrained LQR problems and the unicycle environment to demonstrate its potential to be applied to more complex systems."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Problem Formulation",
|
| 45 |
+
"text": "We consider a discrete-time finite-horizon optimal control problem with state and control input :\nHere, is a fixed time horizon, and respectively denote the state and control trajectory. and are the state and control costs, respectively. We assume that the input cost and the state and input constraints decouple across time, and denote them respectively by , , and . The initial condition is sampled i.i.d. from a possibly unknown distribution .\nAs per the reinforcement learning convention, we assume that we only have access to the dynamics via a simulator, i.e., that we do not know explicitly, but can simulate the dynamics for any and . However, we do assume that we have access to the cost functions , , as they are usually designed by the users, instead of being an inherent hidden part of the system. We also assume that we know the constraints and for the same reason.\nOur goal is to learn a layered policy that consists of 1) a trajectory planner\nthat takes in an initial condition and outputs a reference trajectory , and 2) a tracking controller\nthat takes in the current state and a reference trajectory to output a control action to best track the given trajectory. We now decompose problem (1 ###reference_###) such that it may inform a suitable parameterization for the planning and tracking policies, and ."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Layered Approach to Optimal Control",
|
| 51 |
+
"text": "We first consider a variation of problem (1 ###reference_###) with a fixed initial condition , and rewrite it into a form that has a natural layered control architecture interpretation. For ease of notation, we use unsubscripted letters to denote the respective trajectories stacked as a column vector\nWe begin the rewrite of problem (1 ###reference_###) by introducing a redundant variable to get an equivalent problem\nwhere we use the fact that to move the state cost and constraint from onto . Defining the indicator functions\nwe write the partial augmented Lagrangian of problem (2 ###reference_###) in terms of the (scaled) dual variable\nApplying dual ascent to this augmented Lagrangian, we obtain the following method-of-multiplier updates\nwhich will converge to locally optimal primal and dual variables given mild assumptions on the smoothness and convexity of and the constraints in the neighborhood of the optimal point (See Bertsekas (2014 ###reference_b25###, \u00a72)).\nFor a layered interpretation, we note that the primal update (4 ###reference_###) can be written as a nested optimization problem\nwhere is the locally optimal value of the -minimization step\nWe immediately recognize that optimal control problem (7 ###reference_###) is finding the control action to minimize a quadratic tracking cost for the reference trajectory\nThus, this nested rewrite can be seen as breaking the primal minimization problem (4 ###reference_###) into a trajectory optimization problem (6 ###reference_###) that seeks to find the best reference and a tracking problem (7 ###reference_###) that seeks to best track the perturbed trajectory . A subtlety here is that the planned trajectory, , and the trajectory sent to the tracking controller, , are different. To understand this discrepancy, let us first consider a similar, but perhaps more intuitive, reference optimization problem:\nThis heuristics-based approach, employed in previous works such as Srikanthan et al. (2023b ###reference_b10###); Zhang et al. (2024 ###reference_b11###), seeks to find a reference that balances minimizing the nominal cost and not incurring high tracking cost .\nIn these works, the solution is then sent to the tracking controller unperturbed.\nA problem with this approach is that unless the tracking controller can execute the given reference perfectly, the executed trajectory will differ from the planned reference . One can mitigate this deviation by multiplying the tracking cost with a large weight, but this can quickly become numerically ill-conditioned, or bias the planned trajectory towards overly conservative and easy-to-track behaviors.\nIn these works, the solution is then sent to the tracking controller unperturbed.\nA problem with this approach is that unless the tracking controller can execute the given reference perfectly, the executed trajectory will differ from the planned reference . One can mitigate this deviation by multiplying the tracking cost with a large weight, but this can quickly become numerically ill-conditioned, or bias the planned trajectory towards overly conservative and easy-to-track behaviors.\nReturning to the method-of-multiplier updates (4 ###reference_###) and (5 ###reference_###), we note that, under suitable technical conditions, solving the planning layer problem (6 ###reference_###) using the locally optimal dual variable leads to the feasible solution satisfying . 
In particular, the perturbed reference trajectory is sent to the tracking controller defined by problem (7 ###reference_###), and this results in the executed state trajectory matching the reference . This discussion highlights the role of the locally optimal dual variable as coordinating the planning and tracking layers, and motivates our approach of explicitly modeling this dual variable in our learning framework.\nFollowing this intuition, in the next section, we show how to parameterize and to approximately solve (6 ###reference_###) and (7 ###reference_###), respectively. In practice, finding with the iterative update in (5 ###reference_###) can be prohibitively expensive. To circumvent this issue, we recognize that any locally optimal dual variable can be written as a function of the initial condition . We thus seek to learn an approximate map to predict this locally optimal dual variable from the initial condition .111We have been somewhat cavalier in our assumption that such a locally optimal dual variable exists. We note that notions of local duality theory, see for example (Luenberger et al., 1984 ###reference_b26###, Ch 14.2), guarantee the existence of such a locally optimal dual variable under mild assumptions of local convexity.\nWe close this section by noting that the above derivation assumes that the reference trajectory is of the same dimension as the state, i.e., that . However, if the state cost and constraints only require a subset of the states, i.e., if they are defined in terms of , with , then one can modify the discussion above by replacing the redundant constraint with , so that the reference only needs to be specified on the lower dimensional output . We refer the readers to Appendix D ###reference_### for the details."
|
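As an illustration of the method-of-multiplier decomposition described in the section text above, the following is a minimal Python sketch of the alternation between the planning and tracking layers with a dual update on the consensus mismatch. The callables `plan` and `track` stand in for the planning and tracking sub-problems (6) and (7); the sign convention follows the standard scaled augmented-Lagrangian form, and the dimensions are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def coordinate_layers(x0, plan, track, traj_dim, iters=50):
    """Method-of-multipliers style coordination of planning and tracking.

    plan(x0, v)  -> reference trajectory r (planning-layer update)
    track(x0, w) -> executed trajectory x when the tracker follows reference w
    Both are placeholders for the sub-problems described in the text.
    """
    v = np.zeros(traj_dim)            # scaled dual variable
    r = x = None
    for _ in range(iters):
        r = plan(x0, v)               # planning layer: pick the best reference
        x = track(x0, r - v)          # tracking layer: follow the perturbed reference
        v = v + (x - r)               # dual ascent on the consensus mismatch x - r
    return r, x, v
```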
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Actor-Critic Learning in the Layered Control Architecture",
|
| 57 |
+
"text": ""
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Parameterization of the Layered Policy",
|
| 63 |
+
"text": "We parameterize our layered policy so that its structure parallels the dual ascent updates (6 ###reference_###) and (7 ###reference_###). The tracking controller , specified by learnable parameters , seeks to approximate a feedback controller that solves the tracking problem (7 ###reference_###).222The finite-horizon nature of (7 ###reference_###) calls for a time-varying controller. Thus, the correct and associated value function need to be conditioned on the time step . In our experiments, we show that approximating this with a time-invariant controller works well for the time horizons we consider. The trajectory generator seeks to approximately solve the planning problem (6 ###reference_###). It has learnable parameters and and is defined as the solution to the optimization problem\nThus generates a reference trajectory from initial condition by solving problem (9 ###reference_###). The objective of this optimization problem contains two learned components, and , specified by parameters and , respectively. First, is a dual network that seeks to predict the locally optimal dual variable from initial condition . Then, the tracking value function takes in an initial state and a reference trajectory and learns to predict the quadratic tracking cost (7 ###reference_###) that the policy will incur on this reference trajectory. Summarizing, our layered policy consists of three learned components: the dual network , the low-layer tracking policy , and its associated value function . In what follows, we explain how we learn the tracking value function and policy jointly via the actor-critic method, and how to update the dual network in a way similar to dual ascent."
|
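The planning step (9) described above can be sketched as a small first-order optimization over the reference, holding the dual prediction and the learned tracking value function fixed. The names `nominal_cost`, `Q_phi`, and `nu_theta` are illustrative placeholders for the differentiable nominal objective, the learned tracking value function, and the dual network; the way the dual perturbs the reference follows the scaled augmented-Lagrangian convention used in the earlier sketch and is an assumption about the exact form of (9).

```python
import torch

def plan_reference(x0, nominal_cost, Q_phi, nu_theta, traj_dim, steps=200, lr=1e-2):
    """Gradient-descent sketch of the trajectory planner defined by problem (9)."""
    nu = nu_theta(x0).detach()                      # dual prediction, held fixed
    r = torch.zeros(traj_dim, requires_grad=True)   # flattened reference trajectory
    opt = torch.optim.Adam([r], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # nominal cost of the reference plus predicted cost of tracking the perturbed reference
        loss = nominal_cost(r) + Q_phi(x0, r - nu)
        loss.backward()
        opt.step()
    return r.detach()
```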
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Learning the Tracking Controller via Actor-Critic Method",
|
| 69 |
+
"text": "We use the actor-critic method to jointly learn the tracking value function and policy . We are learning a deterministic policy and its value function, a setting that has been extensively explored and for which many off-the-shelf algorithms exist (Silver et al., 2014 ###reference_b20###; Lillicrap et al., 2015 ###reference_b21###; Fujimoto et al., 2018 ###reference_b22###). In what follows, we specify the RL problem for learning the tracking controller and treat the actor-critic algorithm as a black-box solver for finding our desired parameters and .\nWe define an augmented system with the state , which concatenates with a -step reference trajectory , where specifies the tracking controller\u2019s horizon of look-ahead. The augmented state transitions are then given by\nwhere is a block-upshift operator that shifts the reference trajectory forward by one timestep. The cost of the augmented system is chosen to match the tracking optimization problem (7 ###reference_###), i.e., we set\nThe initial condition is found by first sampling , and then setting to the first steps of the reference generated by . We then run the actor-critic algorithm on this augmented system to jointly learn and ."
|
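The augmented tracking problem described above can be sketched as a small gym-style environment: the observation stacks the physical state with a k-step reference window, and the reward is the negative quadratic tracking-plus-control cost, so an off-the-shelf deterministic actor-critic method (e.g. TD3) can be run on it. The simulator `f`, the reference array, and all dimensions are assumptions for illustration.

```python
import numpy as np

class TrackingEnv:
    """Augmented tracking environment: observation = [state, k-step reference window]."""

    def __init__(self, f, ref, input_dim, k, rho=1.0):
        self.f, self.ref, self.k, self.rho = f, ref, k, rho   # ref has shape (T, n)
        self.R = np.eye(input_dim)                             # control-cost weight

    def _obs(self):
        idx = np.minimum(np.arange(self.t, self.t + self.k), len(self.ref) - 1)
        window = self.ref[idx].reshape(-1)        # re-reading the window at every step
        return np.concatenate([self.x, window])   # plays the role of the block-upshift

    def reset(self, x0):
        self.x, self.t = x0, 0
        return self._obs()

    def step(self, u):
        err = self.x - self.ref[self.t]
        cost = 0.5 * self.rho * err @ err + u @ self.R @ u   # quadratic tracking + input cost
        self.x = self.f(self.x, u)                           # black-box simulator step
        self.t += 1
        return self._obs(), -cost, self.t >= len(self.ref), {}
```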
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.3",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Learning the Dual Network",
|
| 75 |
+
"text": "We design our dual network update as an iterative procedure that mirrors the dual ascent update step (5 ###reference_###), which moves the dual variable in the direction of the mismatch between reference and execution . At each iteration, we sample a batch of initial conditions , and for each , we solve the planning problem (9 ###reference_###) with current parameters and to obtain reference trajectories We then send the perturbed trajectories to the tracking controller to obtain the executed trajectories\nSimilar to the dual ascent step, we then perform a gradient ascent step in to move in the direction of :\nwhere denotes the Jacobian of w.r.t. . Note that even though and implicitly depend on , similar to the dual ascent step (5 ###reference_###), we do not differentiate through these two terms when computing this gradient. In the next section, we show that for the case of linear quadratic regulators, this update for the dual network parameter converges to the vicinity of the optimal parameter if the tracking problem is solved to sufficient accuracy."
|
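The dual-network update (12) described above can be sketched as a surrogate-loss gradient step: the mismatch between the executed trajectory and the planned reference is treated as a constant, so the gradient of the surrogate equals the Jacobian-transposed mismatch and a single optimizer step performs the ascent. Here `plan` and `rollout` are placeholders for solving (9) and rolling out the tracking controller; the sign convention matches the earlier sketches and is an assumption.

```python
import torch

def dual_network_update(nu_theta, optimizer, x0_batch, plan, rollout):
    """One ascent step on the dual network, without differentiating through plan/rollout."""
    surrogates = []
    for x0 in x0_batch:
        x0_t = torch.as_tensor(x0, dtype=torch.float32)
        nu = nu_theta(x0_t)                               # predicted dual variable
        nu_np = nu.detach().numpy()
        r = plan(x0, nu_np)                               # reference from problem (9)
        x = rollout(x0, r - nu_np)                        # track the perturbed reference
        mismatch = torch.as_tensor(x - r, dtype=torch.float32)
        surrogates.append(mismatch @ nu)                  # d/dtheta equals J_theta^T (x - r)
    optimizer.zero_grad()
    (-torch.stack(surrogates).mean()).backward()          # negate: a descent step ascends
    optimizer.step()
```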
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.4",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Summary of the Algorithm",
|
| 81 |
+
"text": "We summarize our algorithm in Algorithm 1 ###reference_thm1###. The outer loop of the algorithm (Line 1-9) corresponds to the dual update procedure described in Section 4.3 ###reference_###. Within each iteration of the outer loop, we also run the actor-critic algorithm to update the tracking policy and its value function (Line 5-8). Note that we do not wait for the tracking controller to converge before starting the dual update. In Section 6 ###reference_###, we empirically validate that dual learning can start to make progress even when the tracking controller is still suboptimal. After the components are learned for the specified iterations, we directly apply the learned policy for any new initial condition ."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Analysis for Linear Quadratic Regulator",
|
| 87 |
+
"text": "In this section, we consider the unconstrained linear quadratic regulator (LQR) problem and show that our method learns to predict the optimal dual variable if we solve the tracking problem well enough. We focus on the dual update because the tracking problem (7 ###reference_###) reduces to standard LQR, to which existing results (Bradtke et al., 1994 ###reference_b27###; Tu and Recht, 2018 ###reference_b28###) are readily applicable. In what follows, we define the problem we analyze, and first show that dual network updates of the form (12 ###reference_###) converge to the optimal dual map if one perfectly solves the planning (6 ###reference_###) and tracking problem (7 ###reference_###). We then present a robustness result which shows that the algorithm will converge to the vicinity of the optimal dual variable if we solve the tracking problem with a small error.\nWe consider the instantiation of (2 ###reference_###) with the dynamics\nand cost functions\nwhere , and . States and control inputs are unconstrained, i.e., . The initial condition is sampled i.i.d. from the standard normal distribution .\nIn this case, strong duality holds, and the optimal dual variable333If not further specified, when we refer to or the dual variable, we mean the dual variable associated with the constraint in problem (2 ###reference_###) is a linear function of the initial condition . (See Lemma 2 ###reference_ma2### in Appendix B ###reference_###.) We thus parameterize the dual network as a linear map"
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.1",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "With Optimal Tracking",
|
| 93 |
+
"text": "We first consider the following update rule, wherein we assume that the planning (6 ###reference_###) and tracking problems (7 ###reference_###) are solved optimally. At each iteration, we first sample a minibatch of initial conditions , and use the current to predict the optimal dual variable\nWe assume we perfectly solve the trajectory optimization problem\nwhere is the optimal value of the tracking problem\nThis is a standard LQR optimal control problem, and closed-form expressions for the optimizers and the value function are readily expressed in terms of the solution to a discrete algebraic Riccati equation.\nAfter solving (17 ###reference_###), we update the dual map as\nA feature of this update rule is that the difference between the reference and the executed trajectory can be written out in closed form as follows.\nGiven the update rules (16 ###reference_###), (17 ###reference_###), the difference between the updates and can be written as a linear map of the initial condition as\nwhere and are matrices of appropriate dimensions that depend on , and is symmetric negative definite. See Lemma 3 ###reference_ma3### in Appendix B ###reference_### for definitions of and .\nWe leverage Lemma 1 ###reference_ma1###, and that the matrix is negative definite, to show that the updates (15 ###reference_###)-(18 ###reference_###) make progress in expectation.\nConsider the cost functions (14 ###reference_###) and dynamics (13 ###reference_###), and fix an initial .\nFix a step size and mini-batch size . The iterates generated by the updates (15 ###reference_###)-(18 ###reference_###) satisfy\nwhere is a function of , , , and .\nSee Appendix B ###reference_###.\n\u220e"
|
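In the LQR setting the dual map is linear, so the updates (15)-(18) reduce to a simple stochastic iteration on its weight matrix: predict nu = W x0, solve planning and tracking (abstracted here as callables, e.g. the closed-form Riccati solutions), and move W along the average outer product of the tracking mismatch with the initial condition. All function names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def lqr_dual_map_iteration(W, sample_x0, solve_planning, solve_tracking,
                           eta=0.5, batch=64, iters=200):
    """Sketch of the linear dual-map updates (15)-(18) for the unconstrained LQR case."""
    for _ in range(iters):
        step = np.zeros_like(W)
        for _ in range(batch):
            x0 = sample_x0()
            nu = W @ x0                       # linear dual prediction (15)
            r = solve_planning(x0, nu)        # planning update (16)
            x = solve_tracking(x0, r, nu)     # executed trajectory from tracking (17)
            step += np.outer(x - r, x0)       # mismatch times initial condition
        W = W + (eta / batch) * step          # dual-map ascent step (18)
    return W
```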
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.2",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "With Suboptimal Tracking",
|
| 99 |
+
"text": "We consider the case where we only have approximate solutions to the updates (17 ###reference_###) and (16 ###reference_###). We leverage the structural properties of the LQR problem, and parameterize the optimal tracking controller as a linear map, and its value function as a quadratic function of the augmented state. Denote as the open-loop response of initial condition , we consider perturbations in the optimal value function as\nand perturbations in the control action as\nwhere denotes the solution of (17 ###reference_###). We note that the perturbations represent the difference between learned and optimal policies, and have been shown to decay with the number of transitions used for training (Bradtke et al., 1994 ###reference_b27###; Tu and Recht, 2018 ###reference_b28###). Perturbation analysis on Theorem 1 ###reference_orem1### shows that if the learned controller is close to optimal, the dual map will converge to a small ball around , where the radius of the ball depends on the error of the learned tracking controller. Due to space constraints, we present an informal version of this result here, and relegate a precise statement and proof to Appendix C ###reference_###.\n(informal)\nConsider the dynamics (13 ###reference_###) and cost (14 ###reference_###). Consider the update rules (15 ###reference_###)-(18 ###reference_###) with the perturbations (19 ###reference_###) and (20 ###reference_###). Denote the size of the perturbations as . Given any , if the perturbations are sufficiently small, there exist step size and batch size such that\nwhere , is an error term depending polynomially on its arguments."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Experiments",
|
| 105 |
+
"text": "We now proceed to evaluate our algorithm numerically on LQR and unicycle systems. For all the experiments, we use the CleanRL (Huang et al., 2022 ###reference_b29###) implementation of Twin-Delayed Deep Deterministic Policy Gradient (TD3) (Fujimoto et al., 2018 ###reference_b22###) as our actor-critic algorithm. All code needed to reproduce the examples found in this section will be made available at the following repository: https://github.com/unstable-zeros/layered-ac ###reference_ac###."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6.1",
|
| 109 |
+
"parent_section_id": "6",
|
| 110 |
+
"section_name": "Unconstrained LQR",
|
| 111 |
+
"text": "We begin by validating our algorithm on unconstrained LQR problems and show that our algorithm achieves near-optimal performance and near-perfect reference tracking. We consider linear systems (13 ###reference_###) with dimensions and horizon . For each system size, we randomly sample 15 pairs of dynamics matrices 444Each entry is sampled i.i.d from the standard normal distribution. and normalize so that the system is marginally stable (). For all setups, we consider a quadratic cost (14 ###reference_###) with . We have , and the initial state . We leverage the linearity of the dynamics to parameterize the tracking controller to be linear, and the value function to be quadratic in the augmented state (10 ###reference_###). Since is quadratic, the optimization problem for the trajectory planner (9 ###reference_###) is a QP, which we solve with CVXPY (Diamond and Boyd, 2016 ###reference_b30###). We parameterize the dual network to be a linear map as in (15 ###reference_###). We train the tracking policy and the dual network jointly for transitions ( episodes) with dual batch size , before freezing the tracking policy and just updating the dual network for another transitions (250 episodes). We specify the detailed training parameters in Table 6 ###reference_### in the Appendix. During training, we periodically evaluate the learned policy by applying it on initial conditions. We then record the cost it achieved and the average tracking deviation . We report relative costs normalized by the optimal cost of solving (2 ###reference_###) directly with the corresponding true dynamics and cost function. Thus, a relative cost of is optimal. The results are summarized below.\n###table_1### In Table 1 ###reference_###, we summarize the cost and mean tracking deviations evaluated at the end of training.555The reported numbers are their respective medians taken over random LQR instances. We first note that the learned policy achieves near-optimal cost and near-perfect tracking for all the system sizes considered. Figure 3 ###reference_### shows a representative sample trajectory that has a mean tracking deviation of . This shows that our parameterization and learning algorithm are able to find good policies with only black-box access to the underlying dynamics. We note that the performance degrades slightly as the size of the system grows. This is likely because learning the tracking controller becomes more difficult as the size of the state space increases. However, even for the largest system we considered (), the cost of the learned controller is still only above optimal.\n###figure_1### We visualize the algorithm\u2019s progress for learning the dual map in Figure 2 ###reference_###. Recall that our theory suggests that in the unconstrained LQR case, the dual map weight will converge to the neighborhood of the optimal dual map , where the radius of the neighborhood depends on the quality of the learned controller. This is indeed the case shown in Figure 2 ###reference_###, where the norm of the difference first decays exponentially before reaching a plateau. 
We note that this plot also validates our choice to start learning the dual network before the tracking controller training has converged, as progress is made starting at the very beginning of the training.\n###table_2### We now compare our approach to the heuristic approach of generating trajectories without using the learned dual variable (Srikanthan et al., 2023b ###reference_b10###; Zhang et al., 2024 ###reference_b11###), summarized in equation (8 ###reference_###). We use the same parameters to train a tracking controller and a value function, with the only difference being that solves (8 ###reference_###) instead of (9 ###reference_###). We show the results in Table 2 ###reference_###.\nFirst, the heuristic policy is outperformed by our approach both in terms of cost and tracking deviation across all the different system sizes, showing the value of learning to predict the dual variable. We note that the difference is especially pronounced for tracking deviation. Since the dual network learned to preemptively perturb the reference to minimize tracking error, it achieves near-perfect tracking and an order of magnitude lower tracking error. This suggests that learning the dual network is especially important in achieving good coordination between the trajectory planner and the tracking controller.\n###table_3### Finally, we note that the penalty parameter is a hyperparameter that needs to be tuned when implementing Algorithm 1 ###reference_thm1###. Since directly affects the objective of the tracking problem, it begs the question of whether the choice of significantly affects the performance of our algorithm. We test this hypothesis on 15 randomly sampled underactuated systems where and . We use the same set of hyperparameters as above except for . We report the results in Table 3 ###reference_###. From Table 3 ###reference_###, we see that algorithm behavior is robust to the choice of , so long as it is large enough; indeed, only the case of leads to significant performance degradation."
|
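For the LQR experiments described above, the planning problem (9) is a QP that can be sketched directly in CVXPY: a quadratic nominal cost on the flattened reference plus the learned quadratic tracking value evaluated on the dual-perturbed reference. Here `Q_bar` (nominal cost weight), `P` (learned value-function weight, assumed PSD so the problem stays convex), `nu` (dual prediction), and the optional lower bound are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def plan_lqr_reference(x0, Q_bar, P, nu, lower=None):
    """CVXPY sketch of the planning QP (9) used in the LQR experiments."""
    r = cp.Variable(Q_bar.shape[0])            # flattened reference trajectory
    z = cp.hstack([x0, r - nu])                # argument of the learned quadratic value fn
    objective = cp.quad_form(r, Q_bar) + cp.quad_form(z, P)
    constraints = [] if lower is None else [r >= lower]   # e.g. state lower bounds
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return r.value
```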
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6.2",
|
| 115 |
+
"parent_section_id": "6",
|
| 116 |
+
"section_name": "LQR with State Constraints",
|
| 117 |
+
"text": "In the unconstrained case, the map from the initial condition to the optimal dual variable is linear. In this section, we consider the case where inequality constraints are introduced and this map is no longer linear. We show that by parameterizing the dual map as a neural network, we can learn well-performing policies that respect the constraints. Similar to the experiments above, we randomly sample LQR systems where . Here we consider stable systems with . The time horizon is fixed to and cost matrices are . We add the constraint that\ni.e., that we restrict all states except for the initial state to be above . Since the additional constraint does not affect the tracking problem, we still parametrize the actor and critic as linear and quadratic, respectively. Since the optimal dual map is no longer linear, we parameterize the dual map as a neural network with a single hidden layer with ReLU activation. Note that the optimization problem for trajectory planning (9 ###reference_###) is still a QP as it does not depend on the form of the dual network. To account for the nonlinearity of the dual network, we increase the dual batch size to trajectories, and train the policy and dual network for transitions, before freezing the tracking controller and training the dual network for another transitions ( episodes). We specify the detailed training parameters in Table 7 ###reference_###. We report the relative cost and mean constraint violation666We measure the constraint violation as . Reported values are the medians over the systems. in Table 4 ###reference_### and show a representative sample trajectory in Figure 3 ###reference_###.\n###table_4### ###figure_2### As seen in Table 4 ###reference_### and the sample trajectories Figure 3 ###reference_###, we can learn to generate reference trajectories satisfying the constraints. The planned trajectory is well-adapted to the learned tracking controller so that the executed trajectory also avoids constraint violations. This shows empirically that our algorithm can effectively learn to predict the dual variable even when the desired dual map is nonlinear. We again compare the results with solving for the reference without learning a dual network (8 ###reference_###), and observe that learning the dual network results in better coordination between the planner and the tracking controller. As a result, the approach with dual learning achieves better constraint satisfaction rates. We conclude this subsection by noting that in practice, one can tighten the constraints to ensure constraint satisfaction, even when there is tracking error. How to leverage the learned dual network to inform constraint tightening is an interesting direction of future work."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "6.3",
|
| 121 |
+
"parent_section_id": "6",
|
| 122 |
+
"section_name": "Unicycle",
|
| 123 |
+
"text": "Finally, we apply our algorithm to controlling a nonlinear unicycle system with state and control input\nwhere are the and positions, the heading angle, and the velocity of the unicycle. The two control inputs are the acceleration and the angular velocity (steering) . We consider the discrete-time nonlinear dynamics given by\nWe consider the problem of steering the vehicle to the origin, specified by the quadratic objective (14 ###reference_###) with , and . The initial condition is sampled uniformly on the unit circle. We take . The trajectory planner learns to generate references only for the positions instead of the full state.\nThe nonlinearity of the dynamics presents several challenges. First, we can no longer assume the form of the optimal tracking controller and its value function and have to parameterize both as neural networks. As a result of this non-convex parameterization of , the reference generation problem (9 ###reference_###) becomes nonconvex. We use gradient descent to find reference trajectories that are locally optimal for the trajectory planning problem. Secondly, the nonlinear nature of the dynamics makes the learning of a tracking controller more difficult. To address this, we warmstart the tracking controller by training on simple line trajectories before running Algorithm 1 ###reference_thm1### in full with reference trajectory generated by solving (9 ###reference_###).This overcomes the difficulty that (9 ###reference_###) tends to generate bad trajectories when is randomly initialized. We train the tracking controller on simple references for transitions ( episodes) as a warmstart, and then run Algorithm 1 ###reference_thm1### for transitions ( episodes). We run the experiment both with and without training the dual network and report our results in Table 5 ###reference_###. To make the result interpretable, we normalize the cost against iLQR as a baseline.777For each initial condition, we run iLQR with two random dynamically feasible initial trajectories. We take the lesser cost as iLQR\u2019s cost.\nFirst, we see that our learned policy achieves performance comparable to that of iLQR\u2014we however emphasize that our policy is trained without explicit knowledge of the dynamics of the system. We note that the costs achieved by the policy learned with and without a dual network are similar. This could be due to the the trajectory generation problem (9 ###reference_###) not being solved exactly. However, learning with a dual network again leads to significantly better tracking performance, highlighting the importance of dual networks in coordinating the planning and tracking layers.\n###table_5###"
|
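A minimal version of the unicycle simulator used in this section is sketched below as a forward-Euler discretization of the standard unicycle model; the step size dt is an assumption, since the exact discretization constants are not restated in the text.

```python
import numpy as np

def unicycle_step(state, control, dt=0.1):
    """Forward-Euler step of the unicycle: state = (px, py, theta, v), control = (a, omega)."""
    px, py, theta, v = state
    a, omega = control
    return np.array([
        px + dt * v * np.cos(theta),   # x-position
        py + dt * v * np.sin(theta),   # y-position
        theta + dt * omega,            # heading angle driven by steering rate
        v + dt * a,                    # velocity driven by acceleration
    ])
```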
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "7",
|
| 127 |
+
"parent_section_id": null,
|
| 128 |
+
"section_name": "Conclusion",
|
| 129 |
+
"text": "We proposed a principled way of parameterizing and learning a layered control policy composed of a trajectory planner and a tracking controller. We derived our parameterization from an optimal control problem and showed that a dual network emerges naturally to coordinate the two components. We showed that our algorithm can learn to predict the optimal dual variable for unconstrained LQR problems and validated this theory via simulation experiments. Further simulation experiments also demonstrated the potential of applying this method to nonlinear control problems. Future work will explore using the dual network to inform constraint tightening and parameterizing the planner (9 ###reference_###) directly as a neural network to reduce online computation."
|
| 130 |
+
}
|
| 131 |
+
],
|
| 132 |
+
"appendix": [
|
| 133 |
+
{
|
| 134 |
+
"section_id": "Appendix 1",
|
| 135 |
+
"parent_section_id": null,
|
| 136 |
+
"section_name": "Appendix A Experiment Setup",
|
| 137 |
+
"text": "See Table 6 ###reference_###.\n###table_6### See Table 7 ###reference_###.\n###table_7### The dual network is chosen to be an MLP with one hidden layer of 128 neurons. The activation is chosen to be ReLU.\nSee Table 8 ###reference_###.\n###table_8### The dual network is chosen to be an MLP with one hidden layer of 128 neurons. The actor and critic are both MLPs with a single hidden layer of 256 neurons. All activation functions are ReLU."
|
| 138 |
+
},
|
| 139 |
+
{
|
| 140 |
+
"section_id": "Appendix 2",
|
| 141 |
+
"parent_section_id": null,
|
| 142 |
+
"section_name": "Appendix B Proofs for Theorem 1",
|
| 143 |
+
"text": "The LQR problem considered in Section 5 ###reference_### with costs (14 ###reference_###) and dynamics (13 ###reference_###) admits closed-form solutions to the -update (16 ###reference_###) and -update (17 ###reference_###). In this section, we begin by showing that for this problem, the dual variable can indeed be written as a linear map of the initial condition . We then derive the closed-form solutions to the updates (16 ###reference_###) and (17 ###reference_###). Finally, we use a contraction argument to show our desired result in Theorem 1 ###reference_orem1###. In the process, we make clear the conditions on step size and batch size to guarantee the contraction.\nFor easing notation, for the rest of this section, we again define . We also define the matrices\nFor the problem considered in section 2 ###reference_###, given the initial condition , the optimal dual variable can be expressed as a unique linear map from as\nFrom the KKT condition for the optimization problem (2 ###reference_###), we have that\nSolving for , we get that\nAlso from the KKT condition, we have that . Subbing this into the above expression and rearranging terms, we get that\nFinally, from the equivalence of the original problem (1 ###reference_###) and the redundant problem (2 ###reference_###), we see that can be expressed in closed form as\n\u220e\nWe now derive the closed-form update rules and show the following.\nIn the LQR setting, we have that the difference between the updates and can be written as a linear map of the initial condition as\nwhere\nis symmetric negative definite.\nWe start by deriving the closed-form expressions for solving both the updates (16 ###reference_###) and (17 ###reference_###). We begin by writing out the update rule more explicitly. First, we note that we can write all satisfying the dynamics constraint as\nDefine\nWe can solve for the optimal control action in closed form as\nwhere we defined\nSubbing this back, we get that\nwhere we defined\nNote that we arrived at from a partial minimization on , which preserves the convexity of the problem on . Thus, we have that\nWith the knowledge of , we can now solve for in closed-form.\nSince , we have that\nThus, overall, we update rules are given as\nFrom the closed-form update rules specified above, we have that\nDenote\nwe get the expression that we desire.\nFrom the fact that , , and , it follows that .\n\u220e\nNow, recall that for any , the optimal dual map satisfies that , where is the optimal dual variable for the given . From the KKT condition of (2 ###reference_###), we know that induces a fixed point to the update rules (16 ###reference_###) and (17 ###reference_###). Thus,\nSince this holds for all , we have that\nBefore starting with the main proof, we present the following lemma.\nFor a set of i.i.d normal vectors , we have that\nWe begin by rewriting the above expression with a change of variables, where we define\nSince is a sum of outer products of independently distributed normal random vectors, follows the Wishart distribution. Specifically, we have that\nThe above expression can then be bounded as\nwhere we first used Jensen\u2019s inequality and then, in the following equalities, used the properties of Wishart random variables.\n\u220e\nWe can now start analyzing the progress of the dual update. 
First, combining Lemma 3 ###reference_ma3### with the dual update rule (18 ###reference_###), we have that\nThus, for the expected norm of interest, we have\nwhere in step we used the above fact that , and in the final step, we used Lemma 4 ###reference_ma4###. We have the desired contraction if\nNote that choosing\nminimizes the norm . Solving for with this choice of , we have that needs to satisfy that\nFollowing these appropriate choices of and , we have"
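The inline symbols of the contraction argument are stripped in this extraction. As a hedged sketch only (not the paper's exact statement), the conclusion it builds toward has the standard form below, writing Theta_k for the dual-map iterate and Theta* for its fixed point; the precise constants and the step-size/batch-size conditions are in the original equations.

```latex
% Hedged sketch: for a suitably small step size \eta and large enough batch size B,
% the dual update contracts in expectation toward the fixed point \Theta^\star,
\[
  \mathbb{E}\,\bigl\|\Theta_{k+1}-\Theta^\star\bigr\|
  \;\le\; \gamma\,\mathbb{E}\,\bigl\|\Theta_{k}-\Theta^\star\bigr\|,
  \qquad \gamma\in(0,1),
\]
% so that after K iterations
\[
  \mathbb{E}\,\bigl\|\Theta_{K}-\Theta^\star\bigr\|
  \;\le\; \gamma^{K}\,\bigl\|\Theta_{0}-\Theta^\star\bigr\|.
\]
```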
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"section_id": "Appendix 3",
|
| 147 |
+
"parent_section_id": null,
|
| 148 |
+
"section_name": "Appendix C Proof of Theorem 2",
|
| 149 |
+
"text": "We start by showing that when and (See Lemma 3 ###reference_ma3### above) are perturbed by small additive perturbations, the algorithm can still converge to the vicinity of the optimal if the perturbations are small enough.\nConsider the perturbations in and as\nIf the perturbation in satisfies that\nfor any , then we can pick step size and batch size such that the update (21 ###reference_###) converges to the vicinity of the optimal dual variable, i.e., that\nwhere , and\nWe follow similar steps as when we showed the similar result for the unperturbed case.\nwhere the last step followed the same steps in the proof of Theorem 1 ###reference_orem1###. We now proceed to bound the second term.\nCombining this with the unperturbed result, we get that\nwhere\nFor the iterations to be contractive, we would need that\nAgain, note that choosing\nminimizes the norm . Thus, for the inequality to hold, needs to satisfy\nfor some or equivalently\nWe can then pick\nso that\nThe result then follows from telescoping the sum.\n\u220e\nWe now consider the perturbations we described in Section 5.2 ###reference_### and bound the terms and in terms of the perturbations .\nConsider the perturbations specified in (19 ###reference_###) and (20 ###reference_###). Denote the norms of the perturbations as\nIf , we have that and as\nwhere .\nWe first note that the perturbed update rule gives the updates\nThe difference between and can be summarized as\nwhere\nWe now proceed to bound . Denote the reduced SVD of as\nFor the sake of simplicity, we denote . By Woodbury matrix identity, we have that\nThus, we have that\nThe first term corresponds to the unperturbed . We thus proceed to bound all the other terms left. Define\nwe have that\nTo bound the last term, we use the fact that\nWe invoke the reverse triangle inequality to get that\nFrom the assumption that , we have that\nThus, we have that\nThus, we overall have that\nand that\n\u220e\nCombining the two Lemmas above, we can now state Theorem 2 ###reference_orem2### formally.\n(Formal statement of Theorem 2 ###reference_orem2###)\nConsider the cost functions (14 ###reference_###) and dynamics (13 ###reference_###). Consider the update rules (15 ###reference_###)-(18 ###reference_###) with the perturbations (19 ###reference_###) and (20 ###reference_###). Denote the size of the perturbations as\nDefine as in (23 ###reference_###) and as in (24 ###reference_###). Given any , if the perturbations satisfy that,\nfor any , one can pick\nand batch size\nsuch that\nwhere , and\nWe verify the predictions of the theorem qualitatively in the experiment section."
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"section_id": "Appendix 4",
|
| 153 |
+
"parent_section_id": null,
|
| 154 |
+
"section_name": "Appendix D Planning Only Subset of States",
|
| 155 |
+
"text": "We consider the case where the state cost and constraints only require a subset of the states, i.e., if they are defined in terms of , with . Specifically, we consider the problem\nIn this case, one can modify the redundant constraint to be to arrive at the following redundant problem\nwhere we wrote to denote with a slight abuse of notation. A similar derivation to that in Section 3 ###reference_### then arrives at the following iterative update\nand the nested optimization\nwhere is the locally optimal value of the -minimization step\nNote that the only difference is that the trajectory planner only generates reference trajectories on the states required for the state cost and constraints, and that the tracking cost for the lower-level controller also only concerns tracking those states."
|
| 156 |
+
}
|
| 157 |
+
],
|
| 158 |
+
"tables": {
|
| 159 |
+
"1": {
|
| 160 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T1.3\">\n<tr class=\"ltx_tr\" id=\"S6.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.2\">Relative Cost ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.3.3\">Mean Tracking Deviation ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.3.4.1\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.4.2\">1.004</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.4.3\">0.002</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.3.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.3.5.1\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.5.2\">1.009</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.5.3\">0.003</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.3.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.3.6.1\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.6.2\">1.020</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.6.3\">0.008</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.3.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.3.7.1\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.3.7.2\">1.031</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T1.3.7.3\">0.009</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T1.5.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S6.T1.6.2\" style=\"font-size:90%;\">LQR Results on Varying System Sizes.</span></figcaption>\n</figure>",
|
| 161 |
+
"capture": "Table 1: LQR Results on Varying System Sizes."
|
| 162 |
+
},
|
| 163 |
+
"2": {
|
| 164 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T2.11\">\n<tr class=\"ltx_tr\" id=\"S6.T2.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.2.2.2\">Relative Cost ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.3.3.3\">Mean Tracking Deviation ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.5.5.3\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.4.4.1\">1.012 ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.5.5.2\">0.046 ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.7.7.3\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.6.6.1\">1.028 ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.7.7.2\">0.045 ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.9.9.3\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.8.8.1\">1.036 ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.9.9.2\">0.061 ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.11.11.3\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T2.10.10.1\">1.052 ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T2.11.11.2\">0.062 ()</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T2.13.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S6.T2.14.2\" style=\"font-size:90%;\">LQR Results without Dual Learning. Numbers in parentheses denote the percentage difference from the approach with dual learning.</span></figcaption>\n</figure>",
|
| 165 |
+
"capture": "Table 2: LQR Results without Dual Learning. Numbers in parentheses denote the percentage difference from the approach with dual learning."
|
| 166 |
+
},
|
| 167 |
+
"3": {
|
| 168 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T3.3\">\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.2\">0.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.4\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.5\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.6\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.1\">Relative Cost ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.2\">2.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.3\">1.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.4\">1.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.5\">1.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.2.2.6\">1.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T3.3.3.1\">Mean Deviation ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T3.3.3.2\">0.039</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T3.3.3.3\">0.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T3.3.3.4\">0.005</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T3.3.3.5\">0.003</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T3.3.3.6\">0.003</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T3.7.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S6.T3.5.1\" style=\"font-size:90%;\">LQR Results on Varying Hyperparameter </span></figcaption>\n</figure>",
|
| 169 |
+
"capture": "Table 3: LQR Results on Varying Hyperparameter "
|
| 170 |
+
},
|
| 171 |
+
"4": {
|
| 172 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T4.2\">\n<tr class=\"ltx_tr\" id=\"S6.T4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T4.2.2.3\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.1\">Relative Cost ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T4.2.2.2\">Mean Constraint Violation ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T4.2.3.1\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T4.2.3.2\">1.011</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T4.2.3.3\">0.0002</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T4.2.4.1\">No Dual (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.01639v2#S3.E8\" title=\"In 3 Layered Approach to Optimal Control \u2023 Coordinating Planning and Tracking in Layered Control Policies via Actor-Critic Learning\"><span class=\"ltx_text ltx_ref_tag\">8</span></a>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T4.2.4.2\">1.014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T4.2.4.3\">0.002</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T4.4.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S6.T4.5.2\" style=\"font-size:90%;\">Constrained LQR Results</span></figcaption>\n</figure>",
|
| 173 |
+
"capture": "Table 4: Constrained LQR Results"
|
| 174 |
+
},
|
| 175 |
+
"5": {
|
| 176 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T5.2\">\n<tr class=\"ltx_tr\" id=\"S6.T5.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T5.2.2.3\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T5.1.1.1\">Relative Cost ()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T5.2.2.2\">Mean Tracking Deviation ()</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T5.2.3.1\">iLQR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T5.2.3.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T5.2.3.3\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T5.2.4.1\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T5.2.4.2\">1.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T5.2.4.3\">0.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.2.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T5.2.5.1\">No Dual</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T5.2.5.2\">1.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S6.T5.2.5.3\">0.05</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T5.4.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"S6.T5.5.2\" style=\"font-size:90%;\">Unicycle Results</span></figcaption>\n</figure>",
|
| 177 |
+
"capture": "Table 5: Unicycle Results"
|
| 178 |
+
},
|
| 179 |
+
"6": {
|
| 180 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T6\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A1.T6.2\">\n<tr class=\"ltx_tr\" id=\"A1.T6.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T6.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T6.2.1.1.1\">Parameter</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T6.2.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T6.2.1.2.1\">Value</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T6.2.2.1\">TD3 Policy Noise</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T6.2.2.2\">5e-4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T6.2.3.1\">TD3 Noise Clip</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T6.2.3.2\">1e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T6.2.4.1\">TD3 Exploration Noise</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T6.2.4.2\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T6.2.5.1\">actor learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T6.2.5.2\">3e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T6.2.6.1\">actor batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T6.2.6.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T6.2.7.1\">critic learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T6.2.7.2\">3e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T6.2.8.1\">critic batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T6.2.8.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T6.2.9.1\">dual learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T6.2.9.2\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T6.2.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"A1.T6.2.10.1\">dual batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"A1.T6.2.10.2\">5</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T6.3.1.1\" style=\"font-size:90%;\">Table 6</span>: </span><span class=\"ltx_text\" id=\"A1.T6.4.2\" style=\"font-size:90%;\">Hyperparameters for the Unconstrained LQR Experiments</span></figcaption>\n</figure>",
|
| 181 |
+
"capture": "Table 6: Hyperparameters for the Unconstrained LQR Experiments"
|
| 182 |
+
},
|
| 183 |
+
"7": {
|
| 184 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T7\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A1.T7.2\">\n<tr class=\"ltx_tr\" id=\"A1.T7.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T7.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T7.2.1.1.1\">Parameter</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T7.2.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T7.2.1.2.1\">Value</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T7.2.2.1\">TD3 Policy Noise</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T7.2.2.2\">5e-4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T7.2.3.1\">TD3 Noise Clip</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T7.2.3.2\">1e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T7.2.4.1\">TD3 Exploration Noise</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T7.2.4.2\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T7.2.5.1\">actor learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T7.2.5.2\">3e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T7.2.6.1\">actor batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T7.2.6.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T7.2.7.1\">critic learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T7.2.7.2\">3e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T7.2.8.1\">critic batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T7.2.8.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T7.2.9.1\">dual learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T7.2.9.2\">3e-4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T7.2.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"A1.T7.2.10.1\">dual batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"A1.T7.2.10.2\">40</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T7.3.1.1\" style=\"font-size:90%;\">Table 7</span>: </span><span class=\"ltx_text\" id=\"A1.T7.4.2\" style=\"font-size:90%;\">Hyperparameters for the Constrained LQR Experiments</span></figcaption>\n</figure>",
|
| 185 |
+
"capture": "Table 7: Hyperparameters for the Constrained LQR Experiments"
|
| 186 |
+
},
|
| 187 |
+
"8": {
|
| 188 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T8\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A1.T8.2\">\n<tr class=\"ltx_tr\" id=\"A1.T8.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T8.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T8.2.1.1.1\">Parameter</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T8.2.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T8.2.1.2.1\">Value</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T8.2.2.1\">TD3 Policy Noise</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T8.2.2.2\">1e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T8.2.3.1\">TD3 Noise Clip</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T8.2.3.2\">1e-2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T8.2.4.1\">TD3 Exploration Noise</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T8.2.4.2\">6e-2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T8.2.5.1\">actor learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T8.2.5.2\">1e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T8.2.6.1\">actor batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T8.2.6.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T8.2.7.1\">critic learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T8.2.7.2\">1e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T8.2.8.1\">critic batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T8.2.8.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"A1.T8.2.9.1\">dual learning rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T8.2.9.2\">5e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"A1.T8.2.10.1\">dual batch size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"A1.T8.2.10.2\">60</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T8.3.1.1\" style=\"font-size:90%;\">Table 8</span>: </span><span class=\"ltx_text\" id=\"A1.T8.4.2\" style=\"font-size:90%;\">Hyperparameters for the Unicycle Experiments</span></figcaption>\n</figure>",
|
| 189 |
+
"capture": "Table 8: Hyperparameters for the Unicycle Experiments"
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
"image_paths": {
|
| 193 |
+
"1": {
|
| 194 |
+
"figure_path": "2408.01639v2_figure_1.png",
|
| 195 |
+
"caption": "Figure 2: Training progress for the dual map parameter \u0398\u0398\\Thetaroman_\u0398. Here, the solid lines are the median over 15151515 random LQR instances, and the shaded regions represent the 25t\u2062hsuperscript25\ud835\udc61\u210e25^{th}25 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT to 75t\u2062hsuperscript75\ud835\udc61\u210e75^{th}75 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT percentile.",
|
| 196 |
+
"url": "http://arxiv.org/html/2408.01639v2/x1.png"
|
| 197 |
+
},
|
| 198 |
+
"2": {
|
| 199 |
+
"figure_path": "2408.01639v2_figure_2.png",
|
| 200 |
+
"caption": "Figure 3: A Representative Sample Trajectory for Constrained LQR.",
|
| 201 |
+
"url": "http://arxiv.org/html/2408.01639v2/x2.png"
|
| 202 |
+
}
|
| 203 |
+
},
|
| 204 |
+
"validation": true,
|
| 205 |
+
"references": [
|
| 206 |
+
{
|
| 207 |
+
"1": {
|
| 208 |
+
"title": "Towards a theory of control architecture: A quantitative framework for layered multi-rate control.",
|
| 209 |
+
"author": "Nikolai Matni, Aaron D Ames, and John C Doyle.",
|
| 210 |
+
"venue": "arXiv preprint arXiv:2401.15185, 2024.",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"2": {
|
| 216 |
+
"title": "Layering as optimization decomposition: A mathematical theory of network architectures.",
|
| 217 |
+
"author": "Mung Chiang, Steven H Low, A Robert Calderbank, and John C Doyle.",
|
| 218 |
+
"venue": "Proceedings of the IEEE, 95(1):255\u2013312, 2007.",
|
| 219 |
+
"url": null
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"3": {
|
| 224 |
+
"title": "Unified multirate control: From low-level actuation to high-level planning.",
|
| 225 |
+
"author": "Ugo Rosolia, Andrew Singletary, and Aaron D Ames.",
|
| 226 |
+
"venue": "IEEE Transactions on Automatic Control, 67(12):6627\u20136640, 2022.",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"4": {
|
| 232 |
+
"title": "Multi-rate planning and control of uncertain nonlinear systems: Model predictive control and control lyapunov functions.",
|
| 233 |
+
"author": "Noel Csomay-Shanklin, Andrew J Taylor, Ugo Rosolia, and Aaron D Ames.",
|
| 234 |
+
"venue": "In 2022 IEEE 61st Conference on Decision and Control (CDC), pages 3732\u20133739. IEEE, 2022.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"5": {
|
| 240 |
+
"title": "A theory of dynamics, control and optimization in layered architectures.",
|
| 241 |
+
"author": "Nikolai Matni and John C Doyle.",
|
| 242 |
+
"venue": "In 2016 American Control Conference (ACC), pages 2886\u20132893. IEEE, 2016.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"6": {
|
| 248 |
+
"title": "System architecture for process automation: Review and trends.",
|
| 249 |
+
"author": "Tariq Samad, Paul McLaughlin, and Joseph Lu.",
|
| 250 |
+
"venue": "Journal of Process Control, 17(3):191\u2013201, 2007.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"7": {
|
| 256 |
+
"title": "Controls for smart grids: Architectures and applications.",
|
| 257 |
+
"author": "Tariq Samad and Anuradha M Annaswamy.",
|
| 258 |
+
"venue": "Proceedings of the IEEE, 105(11):2244\u20132261, 2017.",
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"8": {
|
| 264 |
+
"title": "An improved cyber-physical systems architecture for industry 4.0 smart factories.",
|
| 265 |
+
"author": "Jehn-Ruey Jiang.",
|
| 266 |
+
"venue": "Advances in Mechanical Engineering, 10(6):1687814018784192, 2018.",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"9": {
|
| 272 |
+
"title": "Augmented lagrangian methods as layered control architectures.",
|
| 273 |
+
"author": "Anusha Srikanthan, Vijay Kumar, and Nikolai Matni.",
|
| 274 |
+
"venue": "arXiv preprint arXiv:2311.06404, 2023a.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"10": {
|
| 280 |
+
"title": "A data-driven approach to synthesizing dynamics-aware trajectories for underactuated robotic systems.",
|
| 281 |
+
"author": "Anusha Srikanthan, Fengjun Yang, Igor Spasojevic, Dinesh Thakur, Vijay Kumar, and Nikolai Matni.",
|
| 282 |
+
"venue": "arXiv preprint arXiv:2307.13782, 2023b.",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"11": {
|
| 288 |
+
"title": "Why change your controller when you can change your planner: Drag-aware trajectory generation for quadrotor systems.",
|
| 289 |
+
"author": "Hanli Zhang, Anusha Srikanthan, Spencer Folk, Vijay Kumar, and Nikolai Matni.",
|
| 290 |
+
"venue": "arXiv preprint arXiv:2401.04960, 2024.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"12": {
|
| 296 |
+
"title": "Rma: Rapid motor adaptation for legged robots.",
|
| 297 |
+
"author": "Ashish Kumar, Zipeng Fu, Deepak Pathak, and Jitendra Malik.",
|
| 298 |
+
"venue": "arXiv preprint arXiv:2107.04034, 2021.",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"13": {
|
| 304 |
+
"title": "Champion-level drone racing using deep reinforcement learning.",
|
| 305 |
+
"author": "Elia Kaufmann, Leonard Bauersfeld, Antonio Loquercio, Matthias M\u00fcller, Vladlen Koltun, and Davide Scaramuzza.",
|
| 306 |
+
"venue": "Nature, 620(7976):982\u2013987, 2023.",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"14": {
|
| 312 |
+
"title": "Feudal reinforcement learning.",
|
| 313 |
+
"author": "Peter Dayan and Geoffrey E Hinton.",
|
| 314 |
+
"venue": "Advances in neural information processing systems, 5, 1992.",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"15": {
|
| 320 |
+
"title": "Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation.",
|
| 321 |
+
"author": "Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum.",
|
| 322 |
+
"venue": "Advances in neural information processing systems, 29, 2016.",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"16": {
|
| 328 |
+
"title": "Learning multi-level hierarchies with hindsight.",
|
| 329 |
+
"author": "Andrew Levy, George Konidaris, Robert Platt, and Kate Saenko.",
|
| 330 |
+
"venue": "arXiv preprint arXiv:1712.00948, 2017.",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"17": {
|
| 336 |
+
"title": "Data-efficient hierarchical reinforcement learning.",
|
| 337 |
+
"author": "Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine.",
|
| 338 |
+
"venue": "Advances in neural information processing systems, 31, 2018a.",
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"18": {
|
| 344 |
+
"title": "Feudal networks for hierarchical reinforcement learning.",
|
| 345 |
+
"author": "Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu.",
|
| 346 |
+
"venue": "In International Conference on Machine Learning, pages 3540\u20133549. PMLR, 2017.",
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"19": {
|
| 352 |
+
"title": "Near-optimal representation learning for hierarchical reinforcement learning.",
|
| 353 |
+
"author": "Ofir Nachum, Shixiang Gu, Honglak Lee, and Sergey Levine.",
|
| 354 |
+
"venue": "arXiv preprint arXiv:1810.01257, 2018b.",
|
| 355 |
+
"url": null
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"20": {
|
| 360 |
+
"title": "Deterministic policy gradient algorithms.",
|
| 361 |
+
"author": "David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller.",
|
| 362 |
+
"venue": "In International conference on machine learning, pages 387\u2013395. Pmlr, 2014.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"21": {
|
| 368 |
+
"title": "Continuous control with deep reinforcement learning.",
|
| 369 |
+
"author": "Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra.",
|
| 370 |
+
"venue": "arXiv preprint arXiv:1509.02971, 2015.",
|
| 371 |
+
"url": null
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"22": {
|
| 376 |
+
"title": "Addressing function approximation error in actor-critic methods.",
|
| 377 |
+
"author": "Scott Fujimoto, Herke Hoof, and David Meger.",
|
| 378 |
+
"venue": "In International conference on machine learning, pages 1587\u20131596. PMLR, 2018.",
|
| 379 |
+
"url": null
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"23": {
|
| 384 |
+
"title": "Actor-critic physics-informed neural lyapunov control.",
|
| 385 |
+
"author": "Jiarui Wang and Mahyar Fazlyab.",
|
| 386 |
+
"venue": "arXiv preprint arXiv:2403.08448, 2024.",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"24": {
|
| 392 |
+
"title": "Cacto: Continuous actor-critic with trajectory optimization\u2014towards global optimality.",
|
| 393 |
+
"author": "Gianluigi Grandesso, Elisa Alboni, Gastone P Rosati Papini, Patrick M Wensing, and Andrea Del Prete.",
|
| 394 |
+
"venue": "IEEE Robotics and Automation Letters, 2023.",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"25": {
|
| 400 |
+
"title": "Constrained optimization and Lagrange multiplier methods.",
|
| 401 |
+
"author": "Dimitri P Bertsekas.",
|
| 402 |
+
"venue": "Academic press, 2014.",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"26": {
|
| 408 |
+
"title": "Linear and nonlinear programming, volume 2.",
|
| 409 |
+
"author": "David G Luenberger, Yinyu Ye, et al.",
|
| 410 |
+
"venue": "Springer, 1984.",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"27": {
|
| 416 |
+
"title": "Adaptive linear quadratic control using policy iteration.",
|
| 417 |
+
"author": "Steven J Bradtke, B Erik Ydstie, and Andrew G Barto.",
|
| 418 |
+
"venue": "In Proceedings of 1994 American Control Conference-ACC\u201994, volume 3, pages 3475\u20133479. IEEE, 1994.",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"28": {
|
| 424 |
+
"title": "Least-squares temporal difference learning for the linear quadratic regulator.",
|
| 425 |
+
"author": "Stephen Tu and Benjamin Recht.",
|
| 426 |
+
"venue": "In International Conference on Machine Learning, pages 5005\u20135014. PMLR, 2018.",
|
| 427 |
+
"url": null
|
| 428 |
+
}
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"29": {
|
| 432 |
+
"title": "Cleanrl: High-quality single-file implementations of deep reinforcement learning algorithms.",
|
| 433 |
+
"author": "Shengyi Huang, Rousslan Fernand Julien Dossa, Chang Ye, Jeff Braga, Dipam Chakraborty, Kinal Mehta, and Jo\u00e3o G.M. Ara\u00fajo.",
|
| 434 |
+
"venue": "Journal of Machine Learning Research, 23(274):1\u201318, 2022.",
|
| 435 |
+
"url": null
|
| 436 |
+
}
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"30": {
|
| 440 |
+
"title": "CVXPY: A Python-embedded modeling language for convex optimization.",
|
| 441 |
+
"author": "Steven Diamond and Stephen Boyd.",
|
| 442 |
+
"venue": "Journal of Machine Learning Research, 17(83):1\u20135, 2016.",
|
| 443 |
+
"url": null
|
| 444 |
+
}
|
| 445 |
+
}
|
| 446 |
+
],
|
| 447 |
+
"url": "http://arxiv.org/html/2408.01639v2"
|
| 448 |
+
}
|
20241217/2408.02960v2.json
ADDED
|
@@ -0,0 +1,403 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Anytime Multi-Agent Path Finding with an Adaptive Delay-Based Heuristic",
|
| 3 |
+
"abstract": "Anytime multi-agent path finding (MAPF) is a promising approach to scalable and collision-free path optimization in multi-agent systems. MAPF-LNS, based on Large Neighborhood Search (LNS), is the current state-of-the-art approach where a fast initial solution is iteratively optimized by destroying and repairing selected paths of the solution. Current MAPF-LNS variants commonly use an adaptive selection mechanism to choose among multiple destroy heuristics. However, to determine promising destroy heuristics, MAPF-LNS requires a considerable amount of exploration time. As common destroy heuristics are stationary, i.e., non-adaptive, any performance bottleneck caused by them cannot be overcome by adaptive heuristic selection alone, thus limiting the overall effectiveness of MAPF-LNS.\nIn this paper, we propose Adaptive Delay-based Destroy-and-Repair Enhanced with Success-based Self-learning (ADDRESS) as a single-destroy-heuristic variant of MAPF-LNS. ADDRESS applies restricted Thompson Sampling to the top- set of the most delayed agents to select a seed agent for adaptive LNS neighborhood generation. We evaluate ADDRESS in multiple maps from the MAPF benchmark set and demonstrate cost improvements by at least 50% in large-scale scenarios with up to a thousand agents, compared with the original MAPF-LNS and other state-of-the-art methods.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "A wide range of real-world applications like goods transportation in warehouses, search and rescue missions, and traffic management can be formulated as Multi-Agent Path Finding (MAPF) problem, where the goal is to find collision-free paths for multiple agents with each having an assigned start and goal location. Finding optimal solutions, w.r.t. minimal flowtime or makespan is NP-hard, which limits scalability of most state-of-the-art MAPF solvers (Ratner and Warmuth 1986 ###reference_b21###; Sharon et al. 2012 ###reference_b25###; Yu and LaValle 2013 ###reference_b30###).\nAnytime MAPF based on Large Neighborhood Search (LNS) is a promising approach to finding fast and high-quality solutions to the MAPF problem within a fixed time budget (Li et al. 2021 ###reference_b14###). Given an initial feasible solution and a set of destroy heuristics, LNS iteratively destroys and replans a fixed number of paths, according to an agent neighborhood, until the time budget runs out. MAPF-LNS represents the current state-of-the-art in anytime MAPF and has been experimentally shown to scale up to scenarios with hundreds of agents (Li et al. 2021 ###reference_b14###). Due to its increasing popularity, several extensions have been proposed like fast local repairing, integration of primal heuristics, machine learning-guided neighborhood selection, neighborhood size adaptation, and parallelism (Chan et al. 2024 ###reference_b4###; Huang et al. 2022 ###reference_b11###; Lam et al. 2023 ###reference_b13###; Li et al. 2022 ###reference_b15###; Phan et al. 2024b ###reference_b19###).\nCurrent MAPF-LNS variants use an adaptive selection mechanism to choose from the set of destroy heuristics, as illustrated in Figure 1 ###reference_### (Ropke and Pisinger 2006 ###reference_b22###). However, to determine promising destroy heuristics, MAPF-LNS requires a considerable amount of exploration time. As common destroy heuristics are stationary, i.e., non-adaptive (Li et al. 2021 ###reference_b14###), any performance bottleneck caused by them cannot be overcome by the adaptive selection mechanism alone, thus limiting the overall effectiveness of MAPF-LNS.\n###figure_1### In this paper, we propose Adaptive Delay-based Destroy-and-Repair Enhanced with Success-based Self-learning (ADDRESS), as a single-destroy-heuristic variant of MAPF-LNS, illustrated in Figure 1 ###reference_###. ADDRESS applies restricted Thompson Sampling to the top- set of the most delayed agents to select a seed agent for adaptive LNS neighborhood generation. Our contributions are as follows:\nWe discuss a performance bottleneck of the current empirically most effective destroy heuristic in MAPF-LNS and its implications for large-scale scenarios.\nWe define an adaptive destroy heuristic, called ADDRESS heuristic, to generate neighborhoods based on the top- set of the most delayed agents, using multi-armed bandits like Thompson Sampling. We formulate a simplified variant of MAPF-LNS using only our ADDRESS heuristic, as illustrated in Figure 1 ###reference_###.\nWe evaluate ADDRESS in multiple maps from the MAPF benchmark set (Stern et al. 
2019 ###reference_b27###) and demonstrate cost improvements by at least 50% in large-scale scenarios with up to a thousand agents, compared with the original MAPF-LNS and other state-of-the-art methods.\nWhile our paper focuses on MAPF, our ADDRESS heuristic can also be applied to other problem classes, where variables can be sorted by their cost contribution to generate LNS neighborhoods (Pisinger and Ropke 2019 ###reference_b20###)."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Background",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Multi-Agent Path Finding (MAPF)",
|
| 21 |
+
"text": "We focus on maps as undirected unweighted graphs , where vertex set contains all possible locations and edge set contains all possible transitions or movements between adjacent locations. An instance consists of a map and a set of agents with each agent having a start location and a goal location . At every time step , all agents can move along the edges in or wait at their current location (Stern et al. 2019 ###reference_b27###).\nMAPF aims to find a collision-free plan for all agents. A plan consists of individual paths per agent , where , , , and is the length or travel distance of path . The delay of path is defined by the difference of path length and the length of the shortest path from to w.r.t. map .\nIn this paper, we consider vertex conflicts that occur when two agents and occupy the same location at time step and edge conflicts that occur when two agents and traverse the same edge in opposite directions at time step (Stern et al. 2019 ###reference_b27###). A plan is a solution, i.e., feasible when it does not have any vertex or edge conflicts. Our goal is to find a feasible solution by minimizing the flowtime equivalent to minimizing the sum of delays or (total) cost . In the context of anytime MAPF, we also consider the Area Under the Curve (AUC) as a measure of how quickly we approach the quality of our final solution."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Anytime MAPF with LNS",
|
| 27 |
+
"text": "Anytime MAPF searches for solutions within a given time budget. The solution quality monotonically improves with increasing time budget (Cohen et al. 2018 ###reference_b8###; Li et al. 2021 ###reference_b14###).\nMAPF-LNS based on Large Neighborhood Search (LNS) is the current state-of-the-art approach to anytime MAPF and shown to scale up to large-scale scenarios with hundreds of agents (Huang et al. 2022 ###reference_b11###; Li et al. 2021 ###reference_b14###). Starting with an initial feasible plan , e.g., found via prioritized planning (PP) from (Silver 2005 ###reference_b26###), MAPF-LNS iteratively modifies by destroying paths of the neighborhood . The destroyed paths are then repaired or replanned using PP to quickly generate new paths . If the new cost is lower than the previous cost , then is replaced by , and the search continues until the time budget runs out. The result of MAPF-LNS is the last accepted solution , with the lowest cost so far.\nMAPF-LNS uses a set of three destroy heuristics, namely a random uniform selection of agents, an agent-based heuristic, and a map-based heuristic (Li et al. 2021 ###reference_b14###). The agent-based heuristic generates a neighborhood, including a seed agent with the current maximum delay and other agents, determined via random walks, that prevent from achieving a lower delay. The map-based heuristic randomly chooses a vertex with a degree greater than 2 and generates a neighborhood of agents moving around . All heuristics are randomized but stationary since they do not adapt their rules and degree of randomization, i.e., the distributions, based on prior improvements made to the solution.\nThe original MAPF-LNS uses an adaptive selection mechanism by maintaining selection weights to choose destroy heuristics (Li et al. 2021 ###reference_b14###; Ropke and Pisinger 2006 ###reference_b22###)."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Multi-Armed Bandits",
|
| 33 |
+
"text": "Multi-armed bandits (MABs) or simply bandits are fundamental decision-making problems, where an MAB or selection algorithm repeatedly chooses an arm among a given set of arms or options to maximize an expected reward of a stochastic reward function , where is a random variable with an unknown distribution (Auer, Cesa-Bianchi, and Fischer 2002 ###reference_b2###). To solve an MAB, one has to determine an optimal arm , which maximizes the expected reward . The MAB algorithm has to balance between exploring all arms to accurately estimate and exploiting its knowledge by greedily selecting the arm with the currently highest estimate of . This is known as the exploration-exploitation dilemma, where exploration can find arms with higher rewards but requires more time for trying them out, while exploitation can lead to fast convergence but possibly gets stuck in a poor local optimum. We will focus on Thompson Sampling and -Greedy as MAB algorithms and explain them in Section 4.2 ###reference_###."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Related Work",
|
| 39 |
+
"text": ""
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.1",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Multi-Armed Bandits for LNS",
|
| 45 |
+
"text": "In recent years, MABs have been used to tune learning and optimization algorithms on the fly (Badia et al. 2020 ###reference_b3###; Hendel 2022 ###reference_b9###; Schaul et al. 2019 ###reference_b24###). UCB1 and -Greedy are commonly used for LNS destroy heuristic selection in traveling salesman problems (TSP), mixed integer linear programming (MILP), and vehicle routing problems (VRP) (Chen et al. 2016 ###reference_b6###; Hendel 2022 ###reference_b9###). In most cases, a heavily engineered reward function with several weighted terms is used for training the MAB. Recently, a MAPF-LNS variant, called BALANCE, has been proposed to adapt the neighborhood size along with the destroy heuristic choice using a bi-level Thompson Sampling approach (Phan et al. 2024b ###reference_b19###).\nInstead of adapting the destroy heuristic selection, we propose a single adaptive destroy heuristic, thus simplifying the high-level MAPF-LNS procedure (Figure 1 ###reference_###). Our destroy heuristic uses restricted Thompson Sampling with simple binary rewards to select a seed agent from the top- set of the most delayed agents for LNS neighborhood generation, which can also be applied to other problem classes, such as TSP, MILP, or VRP (Pisinger and Ropke 2019 ###reference_b20###)."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Machine Learning in Anytime MAPF",
|
| 51 |
+
"text": "Machine learning has been used in MAPF to directly learn collision-free path finding, to guide the node selection in search trees, or to select appropriate MAPF algorithms for certain maps (Alkazzi and Okumura 2024 ###reference_b1###; Huang, Dilkina, and Koenig 2021 ###reference_b10###; Kaduri, Boyarski, and Stern 2020 ###reference_b12###; Phan et al. 2024a ###reference_b17###, 2025 ###reference_b18###; Sartoretti et al. 2019 ###reference_b23###). (Huang et al. 2022 ###reference_b11###; Yan and Wu 2024 ###reference_b29###) propose machine learning-guided variants of MAPF-LNS, where neighborhoods are generated by stationary procedures, e.g., the destroy heuristics of (Li et al. 2021 ###reference_b14###). The neighborhoods are then selected via an offline trained model. Such methods cannot adapt during the search and require extensive prior efforts like data acquisition, model training, and feature engineering.\nWe focus on adaptive approaches to MAPF-LNS using online learning via MABs. Our destroy heuristic can adjust on the fly via binary reward signals, indicating a successful or failed improvement of the solution quality. The rewards are directly obtained from the LNS without any prior data acquisition or expensive feature engineering."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Adaptive Delay-Based MAPF-LNS",
|
| 57 |
+
"text": "We now introduce Adaptive Delay-based Destroy-and-Repair Enhanced with Success-based Self-learning (ADDRESS) as a simplified yet effective variant of MAPF-LNS."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Original Agent-Based Destroy Heuristic",
|
| 63 |
+
"text": "Our adaptive destroy heuristic is inspired by the agent-based heuristic of (Li et al. 2021 ###reference_b14###), which is empirically confirmed to be the most effective standalone heuristic in most maps (Li et al. 2021 ###reference_b14###; Phan et al. 2024b ###reference_b19###).\nThe idea is to select a seed agent , whose path has a high potential to be shortened, indicated by its delay . A random walk is performed from a random position in to collect other agents whose paths are crossed by the random walk, indicating their contribution to the delay , to generate a neighborhood of size for LNS destroy-and-repair.\nThe original destroy heuristic of (Li et al. 2021 ###reference_b14###) greedily selects the seed agent with the maximum delay . To avoid repeated selection of the same agent, the original heuristic maintains a tabu list, which is emptied when all agents have been selected or when the current seed agent has no delay, i.e., . Therefore, the heuristic has to iterate over all agents in the worst case, which is time-consuming for large-scale scenarios with many agents, introducing a potential performance bottleneck. The original MAPF-LNS cannot overcome this bottleneck because it only adapts the high-level heuristic selection via , as shown in Figure 1 ###reference_###, and thus can only switch to other (less effective) destroy heuristics as an alternative."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "ADDRESS Destroy Heuristic",
|
| 69 |
+
"text": "Our goal is to overcome the limitation of the original agent-based destroy heuristic, and consequently of MAPF-LNS, using MABs. We model each agent as an arm and maintain two counters per agent, namely for successful cost improvements, and for failed cost improvements. Both counters represent the parameters of a Beta distribution , which estimates the potential of an agent to improve the solution as a seed agent. has a mean of and is initialized with and , corresponding to an initial 50:50 chance estimate that an agent could improve the solution if selected as a seed agent (Chapelle and Li 2011 ###reference_b5###).\nSince the number of agents can be large, a naive MAB would need to explore an enormous arm space, which poses a similar bottleneck as the tabu list approach of the original agent-based heuristic (Section 4.1 ###reference_###).\nThus, we restrict the agent selection to the top- set of the most delayed agents with to ease exploration.\nThe simplest MAB is -Greedy, which selects a random seed agent with a probability of , and the agent with the highest expected success rate with the complementary probability of .\nWe propose a restricted Thompson Sampling approach to select a seed agent from . For each agent within the top- set, we sample an estimate of the solution improvement rate and select the agent with the highest sampled estimate . Thompson Sampling is a Bayesian approach with being the prior distribution of the improvement success rate and with updated parameters and being the posterior distribution (Chapelle and Li 2011 ###reference_b5###; Thompson 1933 ###reference_b28###).\nOur destroy heuristic, called ADDRESS heuristic, first sorts all agents w.r.t. their delays to determine the top- set of the most delayed agents. Restricted Thompson Sampling is then applied to the parameters and of all agents to select a seed agent . An LNS neighborhood is generated via random walks, according to (Li et al. 2021 ###reference_b14###), by adding agents whose paths are crossed by the random walk. Note that these agents are not necessarily part of the top- set .\nThe full formulation of our ADDRESS heuristic with Thompson Sampling is provided in Algorithm 1 ###reference_###, where represents the instance to be solved, represents the current solution, restricts the seed agent selection, and represent the parameters for the corresponding Beta distributions per agent for Thompson Sampling."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.3",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "ADDRESS Formulation",
|
| 75 |
+
"text": "We now integrate our ADDRESS heuristic into the MAPF-LNS algorithm (Li et al. 2021 ###reference_b14###). For a more focused search, we propose a simplified variant, called ADDRESS, which only uses our adaptive destroy heuristic instead of determining a promising stationary heuristic via time-consuming exploration, as illustrated in Figure 1 ###reference_###.\nADDRESS iteratively invokes our proposed destroy heuristic of Algorithm 1 ###reference_### with the parameters to select a seed agent and generate an LNS neighborhood using the random walk procedure of the original MAPF-LNS (Li et al. 2021 ###reference_b14###). Afterward, the standard destroy-and-repair operations of MAPF-LNS are performed on the neighborhood to produce a new solution . If the new solution has a lower cost than the previous solution , then is incremented and is replaced by . Otherwise, is incremented. The whole procedure is illustrated in Figure 2 ###reference_###.\nThe full formulation of ADDRESS is provided in Algorithm 2 ###reference_###, where represents the instance to be solved and restricts the seed agent selection. The parameters are all initialized with 1 as a uniform prior.\n###figure_2###"
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.4",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Conceptual Discussion",
|
| 81 |
+
"text": "ADDRESS is a simple and adaptive approach to scalable anytime MAPF. The adaptation is controlled by the learnable parameters and per agent , and the top- ranking of potential seed agents. Our ADDRESS heuristic can significantly improve MAPF-LNS, overcoming the performance bottleneck of the original agent-based heuristic of (Li et al. 2021 ###reference_b14###) by selecting seed agents via MABs instead of greedily, and restricting the selection to the top- set of the most delayed agents to ease exploration.\nThe parameters and enable the seed agent selection via Thompson Sampling, which considers the improvement success rate under uncertainty via Bayesian inference (Thompson 1933 ###reference_b28###). Unlike prior MAB-enhanced LNS approaches, ADDRESS only uses binary rewards denoting success or failure, thus greatly simplifying our approach compared to alternative MAB approaches (Chen et al. 2016 ###reference_b6###; Chmiela et al. 2023 ###reference_b7###; Hendel 2022 ###reference_b9###; Phan et al. 2024b ###reference_b19###).\nThe top- set enables efficient learning by reducing the number of options for Thompson Sampling, which otherwise would require exhaustive exploration of all agents . The top- set supports fast adaptation by filtering out seed agent candidates whose paths were significantly shortened earlier. While the top- ranking causes some overhead due to sorting agents, our experiments in Section 5 ###reference_### suggest that the sorting overhead is outweighed by the performance gains regarding cost and AUC in large-scale scenarios.\nOur single-destroy-heuristic approach enables a more focused search toward high-quality solutions without time-consuming exploration of stationary (and less effective) destroy heuristics. Due to its simplicity, our ADDRESS heuristic can be easily applied to other problem classes, such as TSP, MILP, or VRP, when using so-called worst or critical destroy heuristics, focusing on high-cost variables that \u201cspoil\u201d the structure of the solution (Pisinger and Ropke 2019 ###reference_b20###). We defer such applications to future work."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Experiments111Code is provided at https://github.com/JimyZ13/ADDRESS.",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.1",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Experiment \u2013 Choice of",
|
| 93 |
+
"text": "We run ADDRESS with Thompson Sampling and -Greedy to evaluate different choices of on the Den520d and City map with agents and a time budget of 60 seconds. The results are compared with MAPF-LNS using only the agent-based heuristic of (Li et al. 2021 ###reference_b14###), as a stationary variant.\nThe results are shown in Figure 3 ###reference_###. ADDRESS with Thompson Sampling always performs best when . However, ADDRESS is more sensitive to when using -Greedy, which only outperforms the original agent-based heuristic, when . In all our test maps, both ADDRESS variants work best when .\n###figure_3### The results indicate that both ADDRESS variants with either Thompson Sampling or -Greedy can notably outperform the original agent-based heuristic of MAPF-LNS with sufficient restriction via . Thompson Sampling is more robust regarding the choice of ."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.2",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Experiment \u2013 Delay-Based Heuristics",
|
| 99 |
+
"text": "Next, we evaluate the search progress of ADDRESS with Thompson Sampling and -Greedy for different time budgets on the Den520d and City map with agents. The results are compared with MAPF-LNS using only the agent-based heuristic, as a stationary variant.\nThe results are shown in Figure 4 ###reference_###. Both ADDRESS variants outperform the agent-based MAPF-LNS by always achieving lower sums of delays and AUC values, which indicate that ADDRESS always improves faster than the original agent-based heuristic. Thompson Sampling always performs at least as well as -Greedy.\n###figure_4### The results demonstrate the potential of both ADDRESS variants to improve MAPF-LNS over the original agent-based heuristic for any time budget w.r.t. solution cost and speed of cost improvement. This confirms that the combination of MABs and the top- set can overcome the performance bottleneck of the original agent-based heuristic (Section 4.1 ###reference_###) with negligible overhead (Section 4.4 ###reference_###)."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.3",
|
| 103 |
+
"parent_section_id": "5",
|
| 104 |
+
"section_name": "Experiment \u2013 ADDRESS and MAPF-LNS",
|
| 105 |
+
"text": "We compare ADDRESS with the original MAPF-LNS using all stationary destroy heuristics of (Li et al. 2021 ###reference_b14###), as described in Section 2.2 ###reference_###, for different time budgets on the Den520d, Warehouse, and City map with agents. To evaluate the dominance of our ADDRESS heuristic over all stationary heuristics, we introduce a MAPF-LNS variant including all commonly used destroy heuristics, as well as our own.\nThe results are shown in Figure 5 ###reference_###. ADDRESS outperforms both MAPF-LNS variants. The MAPF-LNS variant with our ADDRESS heuristic performs second best in Den520d and generally in the other maps with a maximum time budget of 30 seconds. Using our ADDRESS heuristics always leads to a lower average AUC when the time budget is lower than 120 seconds. The selection weights of MAPF-LNS indicate that our ADDRESS heuristic is the dominant destroy heuristic, as it is quickly preferred over all other heuristics.\n###figure_5### ###figure_6### The results confirm that our ADDRESS heuristic is more effective than the other heuristics in large-scale scenarios with agents (Li et al. 2021 ###reference_b14###), as it is consistently preferred by the original MAPF-LNS within less than 10 seconds of runtime. MAPF-LNS, with our ADDRESS heuristic, generally underperforms ADDRESS since it additionally explores the less effective destroy heuristics, whereas ADDRESS directly optimizes the seed agent selection for LNS neighborhood generation."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.4",
|
| 109 |
+
"parent_section_id": "5",
|
| 110 |
+
"section_name": "Experiment \u2013 State-of-the-Art Comparison",
|
| 111 |
+
"text": "Finally, we compare ADDRESS with the original MAPF-LNS, MAPF-LNS2 (which finds feasible solutions by minimizing collisions), BALANCE, and LaCAM*. We run all algorithms on the Random, Ost003d, Den520d, Warehouse, and City maps with different numbers of agents and a time budget of 60 seconds.\nThe results with ADDRESS, MAPF-LNS, MAPF-LNS2, and BALANCE are shown in Figure 6 ###reference_###. ADDRESS significantly outperforms all other approaches except in Random. BALANCE slightly outperforms MAPF-LNS and MAPF-LNS2 in Den520d and Warehouse with . Due to the large performance gap, we report the sum of delays of LaCAM* and ADDRESS separately in Table 1 ###reference_### for the maximum number of agents per map tried in this experiment. ADDRESS (and all other baselines) clearly outperforms LaCAM*.\nThe experiment demonstrates the ability of ADDRESS to outperform the state-of-the-art in large-scale scenarios with up to a thousand agents like in the Warehouse or City map. The high-level simplification of MAPF-LNS allows ADDRESS to focus its runtime on optimizing seed agents for neighborhood generation without (1) exploring less effective destroy heuristics or (2) iterating through the whole agent set , unlike the original agent-based destroy heuristic, used in MAPF-LNS and BALANCE. However, ADDRESS does not outperform the baselines in smaller scenarios, e.g., in the Random map. In this case, the overhead caused by agent sorting and Thompson Sampling outweighs the benefits of ADDRESS. In contrast, MAPF-LNS and BALANCE resort to the map-based heuristic, which is the dominant heuristic in the Random map (Li et al. 2021 ###reference_b14###)."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "Conclusion",
|
| 117 |
+
"text": "We presented ADDRESS as a single-destroy-heuristic variant of MAPF-LNS. ADDRESS applies restricted Thompson Sampling to the top- set of the most delayed agents to select a seed agent for adaptive LNS neighborhood generation. Therefore, ADDRESS avoids time-consuming exploration of several stationary destroy heuristics.\nOur experiments show that ADDRESS significantly outperforms state-of-the-art anytime MAPF algorithms like the original MAPF-LNS, MAPF-LNS2, BALANCE, and LaCAM* in large-scale scenarios with up to a thousand agents. The effectiveness of our destroy heuristic is confirmed by its lower costs and AUC compared with the original agent-based destroy heuristic in MAPF and the strong preference by the original MAPF-LNS over all other commonly used destroy heuristics. The combination of Thompson Sampling and the top- ranking of the most delayed agents enables efficient learning and a stronger focus on promising seed agent candidates through fast adaptation and filtering of agents whose paths were significantly shortened over time. ADDRESS with -Greedy can also outperform state-of-the-art anytime MAPF with slightly weaker performance than Thompson Sampling, indicating that other MAB algorithms could be used, which we want to investigate in the future.\nMore future work includes the abstraction of agents and the application of our ADDRESS heuristic to other problem classes, such as TSP, MILP, or VRP, where variables can be sorted by their cost contribution to generate neighborhoods."
|
| 118 |
+
}
|
| 119 |
+
],
|
| 120 |
+
"appendix": [],
|
| 121 |
+
"tables": {
|
| 122 |
+
"1": {
|
| 123 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Average sum of delays of ADDRESS and LaCAM* with 95% confidence intervals with a time budget of 60 seconds and the maximum number of agents per map evaluated in Figure <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.02960v2#S5.F6\" title=\"Figure 6 \u2023 Results \u2023 5.3 Experiment \u2013 ADDRESS and MAPF-LNS \u2023 5 Experiments11footnote 1Code is provided at https://github.com/JimyZ13/ADDRESS. \u2023 Anytime Multi-Agent Path Finding with an Adaptive Delay-Based Heuristic\"><span class=\"ltx_text ltx_ref_tag\">6</span></a>. The best performance is highlighted in boldface.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.10.11.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S5.T1.10.11.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.10.11.1.2\">ADDRESS</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.10.11.1.3\">LaCAM*</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_tt\" id=\"S5.T1.2.2.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T1.2.2.3.1\">Random</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S5.T1.4.4.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T1.4.4.3.1\">Ost003d</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S5.T1.6.6.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T1.6.6.3.1\">Den520d</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S5.T1.8.8.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T1.8.8.3.1\">Warehouse</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_rr ltx_border_t\" id=\"S5.T1.10.10.3\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T1.10.10.3.1\">City</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S5.T1.10.10.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 124 |
+
"capture": "Table 1: Average sum of delays of ADDRESS and LaCAM* with 95% confidence intervals with a time budget of 60 seconds and the maximum number of agents per map evaluated in Figure 6. The best performance is highlighted in boldface."
|
| 125 |
+
}
|
| 126 |
+
},
|
| 127 |
+
"image_paths": {
|
| 128 |
+
"1": {
|
| 129 |
+
"figure_path": "2408.02960v2_figure_1.png",
|
| 130 |
+
"caption": "Figure 1: Scheme of our contribution. Instead of using an adaptive selection mechanism \u03c0\ud835\udf0b\\piitalic_\u03c0 to choose among multiple stationary destroy heuristics Hxsubscript\ud835\udc3b\ud835\udc65H_{x}italic_H start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT (Li et al. 2021), ADDRESS (our approach) only uses a single adaptive heuristic.",
|
| 131 |
+
"url": "http://arxiv.org/html/2408.02960v2/x1.png"
|
| 132 |
+
},
|
| 133 |
+
"2": {
|
| 134 |
+
"figure_path": "2408.02960v2_figure_2.png",
|
| 135 |
+
"caption": "Figure 2: Detailed overview of ADDRESS. For each agent ai\u2208\ud835\udc9csubscript\ud835\udc4e\ud835\udc56\ud835\udc9ca_{i}\\in\\mathcal{A}italic_a start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 caligraphic_A, we maintain two parameters \u03b1i,\u03b2i>0subscript\ud835\udefc\ud835\udc56subscript\ud835\udefd\ud835\udc560\\alpha_{i},\\beta_{i}>0italic_\u03b1 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT > 0. At each LNS iteration, all agents are sorted w.r.t. to their delays. A restricted Thompson Sampling approach is applied to the top-K\ud835\udc3eKitalic_K set of the most delayed agents, according to their samples qi\u223cBeta\u2062(\u03b1i,\u03b2i)similar-tosubscript\ud835\udc5e\ud835\udc56Betasubscript\ud835\udefc\ud835\udc56subscript\ud835\udefd\ud835\udc56q_{i}\\sim\\textit{Beta}(\\alpha_{i},\\beta_{i})italic_q start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u223c Beta ( italic_\u03b1 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ), to choose a seed agent index j\ud835\udc57jitalic_j. The path of the seed agent ajsubscript\ud835\udc4e\ud835\udc57a_{j}italic_a start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT is used to generate an LNS neighborhood AN\u2282\ud835\udc9csubscript\ud835\udc34\ud835\udc41\ud835\udc9cA_{N}\\subset\\mathcal{A}italic_A start_POSTSUBSCRIPT italic_N end_POSTSUBSCRIPT \u2282 caligraphic_A via random walks. After running the LNS destroy-and-repair operations on ANsubscript\ud835\udc34\ud835\udc41A_{N}italic_A start_POSTSUBSCRIPT italic_N end_POSTSUBSCRIPT, the parameters \u03b1jsubscript\ud835\udefc\ud835\udc57\\alpha_{j}italic_\u03b1 start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT or \u03b2jsubscript\ud835\udefd\ud835\udc57\\beta_{j}italic_\u03b2 start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT of the seed agent ajsubscript\ud835\udc4e\ud835\udc57a_{j}italic_a start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT are updated, depending on the cost improvement of the new solution.",
|
| 136 |
+
"url": "http://arxiv.org/html/2408.02960v2/x2.png"
|
| 137 |
+
},
|
| 138 |
+
"3": {
|
| 139 |
+
"figure_path": "2408.02960v2_figure_3.png",
|
| 140 |
+
"caption": "Figure 3: Sum of delays for ADDRESS (using \u03f5italic-\u03f5\\epsilonitalic_\u03f5-greedy or Thompson Sampling) compared with MAPF-LNS (using only the agent-based heuristic) for different numbers of options K\ud835\udc3eKitalic_K with m=700\ud835\udc5a700m=700italic_m = 700 agents in both maps, a time budget of 60 seconds, and \u03f5=12italic-\u03f512\\epsilon=\\frac{1}{2}italic_\u03f5 = divide start_ARG 1 end_ARG start_ARG 2 end_ARG.",
|
| 141 |
+
"url": "http://arxiv.org/html/2408.02960v2/x3.png"
|
| 142 |
+
},
|
| 143 |
+
"4": {
|
| 144 |
+
"figure_path": "2408.02960v2_figure_4.png",
|
| 145 |
+
"caption": "Figure 4: Sum of delays and AUC for ADDRESS (using \u03f5italic-\u03f5\\epsilonitalic_\u03f5-greedy or Thompson Sampling) compared with MAPF-LNS (using only the agent-based heuristic) for different time budgets (starting from 15 seconds) with m=700\ud835\udc5a700m=700italic_m = 700 agents in both maps and \u03f5=12italic-\u03f512\\epsilon=\\frac{1}{2}italic_\u03f5 = divide start_ARG 1 end_ARG start_ARG 2 end_ARG. Shaded areas show the 95% confidence interval.",
|
| 146 |
+
"url": "http://arxiv.org/html/2408.02960v2/x4.png"
|
| 147 |
+
},
|
| 148 |
+
"5": {
|
| 149 |
+
"figure_path": "2408.02960v2_figure_5.png",
|
| 150 |
+
"caption": "Figure 5: Sum of delays (left) and AUC (middle) for ADDRESS compared with the original MAPF-LNS (with and without our ADDRESS heuristic) for different time budgets (starting from 15 seconds) with m=700\ud835\udc5a700m=700italic_m = 700 agents in all maps. Shaded areas show the 95% confidence interval. Right: Evolution of the selection weights of MAPF-LNS with our ADDRESS heuristic over time.",
|
| 151 |
+
"url": "http://arxiv.org/html/2408.02960v2/x5.png"
|
| 152 |
+
},
|
| 153 |
+
"6": {
|
| 154 |
+
"figure_path": "2408.02960v2_figure_6.png",
|
| 155 |
+
"caption": "Figure 6: Sum of delays for ADDRESS compared with the original MAPF-LNS (without our ADDRESS heuristic), MAPF-LNS2, and BALANCE for different numbers of agents m\ud835\udc5amitalic_m and a time budget of 60 seconds. Shaded areas show the 95% confidence interval. The legend at the top applies across all plots. A comparison with LaCAM* is shown in Table 1. |\ud835\udcb1|\ud835\udcb1|\\mathcal{V}|| caligraphic_V | denotes the corresponding map size, i.e., the number of occupiable locations.",
|
| 156 |
+
"url": "http://arxiv.org/html/2408.02960v2/x6.png"
|
| 157 |
+
}
|
| 158 |
+
},
|
| 159 |
+
"validation": true,
|
| 160 |
+
"references": [
|
| 161 |
+
{
|
| 162 |
+
"1": {
|
| 163 |
+
"title": "A Comprehensive Review on Leveraging Machine Learning for\nMulti-Agent Path Finding.",
|
| 164 |
+
"author": "Alkazzi, J.-M.; and Okumura, K. 2024.",
|
| 165 |
+
"venue": "IEEE Access.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"2": {
|
| 171 |
+
"title": "Finite-Time Analysis of the Multiarmed Bandit Problem.",
|
| 172 |
+
"author": "Auer, P.; Cesa-Bianchi, N.; and Fischer, P. 2002.",
|
| 173 |
+
"venue": "Machine learning, 47(2-3): 235\u2013256.",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"3": {
|
| 179 |
+
"title": "Agent57: Outperforming the Atari Human Benchmark.",
|
| 180 |
+
"author": "Badia, A. P.; Piot, B.; Kapturowski, S.; Sprechmann, P.; Vitvitskyi, A.; Guo,\nZ. D.; and Blundell, C. 2020.",
|
| 181 |
+
"venue": "In International conference on machine learning, 507\u2013517.\nPMLR.",
|
| 182 |
+
"url": null
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"4": {
|
| 187 |
+
"title": "Anytime Multi-Agent Path Finding using Operation\nParallelism in Large Neighborhood Search.",
|
| 188 |
+
"author": "Chan, S.-H.; Chen, Z.; Lin, D.-L.; Zhang, Y.; Harabor, D.; Koenig, S.; Huang,\nT.-W.; and Phan, T. 2024.",
|
| 189 |
+
"venue": "In Proceedings of the 23rd International Conference on\nAutonomous Agents and Multiagent Systems, 2183\u20132185.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"5": {
|
| 195 |
+
"title": "An Empirical Evaluation of Thompson Sampling.",
|
| 196 |
+
"author": "Chapelle, O.; and Li, L. 2011.",
|
| 197 |
+
"venue": "In Advances in neural information processing systems,\n2249\u20132257.",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"6": {
|
| 203 |
+
"title": "A Multi-Arm Bandit Neighbourhood Search for Routing and\nScheduling Problems.",
|
| 204 |
+
"author": "Chen, Y.; Cowling, P. I.; Polack, F. A. C.; and Mourdjis, P. 2016.",
|
| 205 |
+
"venue": null,
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"7": {
|
| 211 |
+
"title": "Online Learning for Scheduling MIP Heuristics.",
|
| 212 |
+
"author": "Chmiela, A.; Gleixner, A.; Lichocki, P.; and Pokutta, S. 2023.",
|
| 213 |
+
"venue": "In International Conference on Integration of Constraint\nProgramming, Artificial Intelligence, and Operations Research, 114\u2013123.\nSpringer.",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"8": {
|
| 219 |
+
"title": "Anytime Focal Search with Applications.",
|
| 220 |
+
"author": "Cohen, L.; Greco, M.; Ma, H.; Hern\u00e1ndez, C.; Felner, A.; Kumar, T. S.; and\nKoenig, S. 2018.",
|
| 221 |
+
"venue": "In IJCAI, 1434\u20131441.",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"9": {
|
| 227 |
+
"title": "Adaptive Large Neighborhood Search for Mixed Integer\nProgramming.",
|
| 228 |
+
"author": "Hendel, G. 2022.",
|
| 229 |
+
"venue": "Mathematical Programming Computation, 1\u201337.",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"10": {
|
| 235 |
+
"title": "Learning Node-Selection Strategies in Bounded Suboptimal\nConflict-Based Search for Multi-Agent Path Finding.",
|
| 236 |
+
"author": "Huang, T.; Dilkina, B.; and Koenig, S. 2021.",
|
| 237 |
+
"venue": "In International Joint Conference on Autonomous Agents and\nMultiagent Systems (AAMAS).",
|
| 238 |
+
"url": null
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"11": {
|
| 243 |
+
"title": "Anytime Multi-Agent Path Finding via Machine\nLearning-Guided Large Neighborhood Search.",
|
| 244 |
+
"author": "Huang, T.; Li, J.; Koenig, S.; and Dilkina, B. 2022.",
|
| 245 |
+
"venue": "In Proceedings of the 36th AAAI Conference on Artificial\nIntelligence (AAAI), 9368\u20139376.",
|
| 246 |
+
"url": null
|
| 247 |
+
}
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"12": {
|
| 251 |
+
"title": "Algorithm Selection for Optimal Multi-Agent Pathfinding.",
|
| 252 |
+
"author": "Kaduri, O.; Boyarski, E.; and Stern, R. 2020.",
|
| 253 |
+
"venue": "In Proceedings of the International Conference on Automated\nPlanning and Scheduling, volume 30, 161\u2013165.",
|
| 254 |
+
"url": null
|
| 255 |
+
}
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"13": {
|
| 259 |
+
"title": "Exact Anytime Multi-Agent Path Finding Using\nBranch-and-Cut-and-Price and Large Neighborhood Search.",
|
| 260 |
+
"author": "Lam, E.; Harabor, D.; Stuckey, P. J.; and Li, J. 2023.",
|
| 261 |
+
"venue": "In Proceedings of the International Conference on Automated\nPlanning and Scheduling (ICAPS).",
|
| 262 |
+
"url": null
|
| 263 |
+
}
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"14": {
|
| 267 |
+
"title": "Anytime Multi-Agent Path Finding via Large Neighborhood\nSearch.",
|
| 268 |
+
"author": "Li, J.; Chen, Z.; Harabor, D.; Stuckey, P. J.; and Koenig, S. 2021.",
|
| 269 |
+
"venue": "In Proceedings of the International Joint Conference on\nArtificial Intelligence (IJCAI), 4127\u20134135.",
|
| 270 |
+
"url": null
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"15": {
|
| 275 |
+
"title": "MAPF-LNS2: Fast Repairing for Multi-Agent Path\nFinding via Large Neighborhood Search.",
|
| 276 |
+
"author": "Li, J.; Chen, Z.; Harabor, D.; Stuckey, P. J.; and Koenig, S. 2022.",
|
| 277 |
+
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n36(9): 10256\u201310265.",
|
| 278 |
+
"url": null
|
| 279 |
+
}
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"16": {
|
| 283 |
+
"title": "Improving LaCAM for Scalable Eventually Optimal\nMulti-Agent Pathfinding.",
|
| 284 |
+
"author": "Okumura, K. 2023.",
|
| 285 |
+
"venue": "In Proceedings of the International Joint Conference on\nArtificial Intelligence (IJCAI).",
|
| 286 |
+
"url": null
|
| 287 |
+
}
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"17": {
|
| 291 |
+
"title": "Confidence-Based Curriculum Learning for Multi-Agent\nPath Finding.",
|
| 292 |
+
"author": "Phan, T.; Driscoll, J.; Romberg, J.; and Koenig, S. 2024a.",
|
| 293 |
+
"venue": "In Proceedings of the 23rd International Conference on\nAutonomous Agents and Multiagent Systems, 1558\u20131566.",
|
| 294 |
+
"url": null
|
| 295 |
+
}
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"18": {
|
| 299 |
+
"title": "Confidence-Based Curricula for Multi-Agent Path Finding\nvia Reinforcement Learning.",
|
| 300 |
+
"author": "Phan, T.; Driscoll, J.; Romberg, J.; and Koenig, S. 2025.",
|
| 301 |
+
"venue": "Preprint at Research Square.",
|
| 302 |
+
"url": null
|
| 303 |
+
}
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"19": {
|
| 307 |
+
"title": "Adaptive Anytime Multi-Agent Path Finding Using\nBandit-Based Large Neighborhood Search.",
|
| 308 |
+
"author": "Phan, T.; Huang, T.; Dilkina, B.; and Koenig, S. 2024b.",
|
| 309 |
+
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence\n(AAAI), 38(16): 17514\u201317522.",
|
| 310 |
+
"url": null
|
| 311 |
+
}
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"20": {
|
| 315 |
+
"title": "Large Neighborhood Search.",
|
| 316 |
+
"author": "Pisinger, D.; and Ropke, S. 2019.",
|
| 317 |
+
"venue": "Handbook of metaheuristics, 99\u2013127.",
|
| 318 |
+
"url": null
|
| 319 |
+
}
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"21": {
|
| 323 |
+
"title": "Finding a Shortest Solution for the NxN Extension of the\n15-Puzzle is Intractable.",
|
| 324 |
+
"author": "Ratner, D.; and Warmuth, M. 1986.",
|
| 325 |
+
"venue": "In Proceedings of the Fifth AAAI National Conference on\nArtificial Intelligence, AAAI\u201986, 168\u2013172. AAAI Press.",
|
| 326 |
+
"url": null
|
| 327 |
+
}
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"22": {
|
| 331 |
+
"title": "An Adaptive Large Neighborhood Search Heuristic for the\nPickup and Delivery Problem with Time Windows.",
|
| 332 |
+
"author": "Ropke, S.; and Pisinger, D. 2006.",
|
| 333 |
+
"venue": "Transportation science, 40(4): 455\u2013472.",
|
| 334 |
+
"url": null
|
| 335 |
+
}
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"23": {
|
| 339 |
+
"title": "PRIMAL: Pathfinding via Reinforcement and Imitation\nMulti-Agent Learning.",
|
| 340 |
+
"author": "Sartoretti, G.; Kerr, J.; Shi, Y.; Wagner, G.; Kumar, T. S.; Koenig, S.; and\nChoset, H. 2019.",
|
| 341 |
+
"venue": "IEEE Robotics and Automation Letters, 4(3): 2378\u20132385.",
|
| 342 |
+
"url": null
|
| 343 |
+
}
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"24": {
|
| 347 |
+
"title": "Adapting Behaviour for Learning Progress.",
|
| 348 |
+
"author": "Schaul, T.; Borsa, D.; Ding, D.; Szepesvari, D.; Ostrovski, G.; Dabney, W.; and\nOsindero, S. 2019.",
|
| 349 |
+
"venue": "arXiv preprint arXiv:1912.06910.",
|
| 350 |
+
"url": null
|
| 351 |
+
}
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"25": {
|
| 355 |
+
"title": "Conflict-Based Search For Optimal Multi-Agent Path\nFinding.",
|
| 356 |
+
"author": "Sharon, G.; Stern, R.; Felner, A.; and Sturtevant, N. 2012.",
|
| 357 |
+
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n26(1): 563\u2013569.",
|
| 358 |
+
"url": null
|
| 359 |
+
}
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"26": {
|
| 363 |
+
"title": "Cooperative Pathfinding.",
|
| 364 |
+
"author": "Silver, D. 2005.",
|
| 365 |
+
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence\nand Interactive Digital Entertainment, 1(1): 117\u2013122.",
|
| 366 |
+
"url": null
|
| 367 |
+
}
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"27": {
|
| 371 |
+
"title": "Multi-Agent Pathfinding: Definitions, Variants, and\nBenchmarks.",
|
| 372 |
+
"author": "Stern, R.; Sturtevant, N.; Felner, A.; Koenig, S.; Ma, H.; Walker, T.; Li, J.;\nAtzmon, D.; Cohen, L.; Kumar, T.; et al. 2019.",
|
| 373 |
+
"venue": "In Proceedings of the International Symposium on Combinatorial\nSearch, volume 10, 151\u2013158.",
|
| 374 |
+
"url": null
|
| 375 |
+
}
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"28": {
|
| 379 |
+
"title": "On the Likelihood that One Unknown Probability exceeds\nAnother in View of the Evidence of Two Samples.",
|
| 380 |
+
"author": "Thompson, W. R. 1933.",
|
| 381 |
+
"venue": "Biometrika, 25(3/4): 285\u2013294.",
|
| 382 |
+
"url": null
|
| 383 |
+
}
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"29": {
|
| 387 |
+
"title": "Neural Neighborhood Search for Multi-Agent Path\nFinding.",
|
| 388 |
+
"author": "Yan, Z.; and Wu, C. 2024.",
|
| 389 |
+
"venue": "In The 12th International Conference on Learning\nRepresentations.",
|
| 390 |
+
"url": null
|
| 391 |
+
}
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"30": {
|
| 395 |
+
"title": "Structure and Intractability of Optimal Multi-Robot Path\nPlanning on Graphs.",
|
| 396 |
+
"author": "Yu, J.; and LaValle, S. 2013.",
|
| 397 |
+
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n27(1): 1443\u20131449.",
|
| 398 |
+
"url": null
|
| 399 |
+
}
|
| 400 |
+
}
|
| 401 |
+
],
|
| 402 |
+
"url": "http://arxiv.org/html/2408.02960v2"
|
| 403 |
+
}
|
20241217/2408.04662v2.json
ADDED
|
@@ -0,0 +1,479 @@
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Citekit: A Modular Toolkit for Large Language Model Citation Generation",
|
| 3 |
+
"abstract": "The emerging paradigm of enabling Large Language Models (LLMs) to generate citations in Question-Answering (QA) tasks is lacking in a unified framework to standardize and fairly compare different citation generation methods, leading to difficulties in reproduction and evaluation. Therefore, we introduce Citekit, an open-source and modular toolkit designed to facilitate the implementation and evaluation of existing citation generation methods, while also fostering the development of new approaches to improve citation quality. This tool is highly extensible, allowing users to utilize 4 main modules and 14 components to construct a pipeline, evaluating an existing method or innovative designs. Our experiments with two state-of-the-art LLMs and 11 citation generation baselines demonstrate varying strengths of different modules in answer accuracy and citation quality improvement, as well as the challenge of enhancing granularity. Based on our analysis of the effectiveness of components, we propose a new method, PEEP, obtaining a balanced answer accuracy and citation quality. Citekit is released at https://github.com/SjJ1017/Citekit.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Large Language Models (LLMs) (OpenAI, 2024 ###reference_b25###; AI@Meta, 2024 ###reference_b2###) nowadays demonstrate strong performance on Question Answering (QA) (Kamalloo et al., 2023 ###reference_b16###) on different scenarios such as Commonsense QA (Talmor et al., 2019 ###reference_b31###), long-form QA (Stelmakh et al., 2023 ###reference_b28###; Min et al., 2020 ###reference_b24###) and Multi-hop QA (Ho et al., 2020 ###reference_b9###; Yang et al., 2018 ###reference_b35###), but they can still inevitably produce hallucinated responses that are non-factual (Huang et al., 2023 ###reference_b14###), nonsensical or irrelevant to the input(Xu et al., 2024b ###reference_b33###), reflecting the ongoing challenges in ensuring factual accuracy. Given the challenges above, Retrieval Augmented Generation (RAG) (Lewis et al., 2021 ###reference_b20###), which leverages facts from external unstructured knowledge helps to enhance the reliability of LLMs and can be made more faithful and verifiable by generating citations (Gao et al., 2024 ###reference_b8###). Asking models to generate citations can improve the factual correctness of answers (Gao et al., 2023b ###reference_b7###), and the citations that link to the original references will allow readers to easily verify the source of the response, making the answers of models more verifiable and explainable. Figure 1 ###reference_### shows how citation generation can help users become more assured of the answer. In Figure 1 ###reference_###, the answer without citation inconsistently states both 1956 and 1976 as the dates for the passage of right-to-work legislation in Louisiana, leading to uncertainty about the actual timeline. If citations are included, readers can scrutinize the reference to understand clearly why there are 2 different dates to the question.\n###figure_1### Given the urgent need, ALCE (Gao et al., 2023b ###reference_b7###) developed some basic methods to enable LLMs to generate citations in QA tasks and propose metrics for evaluating the quality of citations. Following ALCE\u2019s contribution, there are other methods that either use training (Huang et al., 2024a ###reference_b11###; Li et al., 2024a ###reference_b21###; Ye et al., 2024 ###reference_b36###; Huang et al., 2024b ###reference_b12###) or construct complicated pipelines to enhance the ability of generative models in citing external documents (Zhang et al., 2024 ###reference_b37###; Sun et al., 2024 ###reference_b29###; Lee et al., 2023 ###reference_b19###; Fierro et al., 2024 ###reference_b5###; Qian et al., 2024 ###reference_b26###). Another category related to citation generation is LLM attribution (Jain et al., 2023 ###reference_b15###; Xu et al., 2024a ###reference_b32###; Gao et al., 2023a ###reference_b6###; Sun et al., 2023 ###reference_b30###; Huang et al., 2024c ###reference_b13###; Cattan et al., 2024 ###reference_b4###; Abolghasemi et al., 2024 ###reference_b1###), which refers to the capacity of an LLM to generate and provide evidence (Li et al., 2023 ###reference_b22###).\nDespite a variety of state-of-the-art methods, there are still two problems:\nChallenges in reproducibility and improvement: Different works are distinguished largely on their implementation, making it difficult to reproduce and improve. 
There are still some works not transparent enough, with inaccessible codes, and different works are realized using different frameworks and in different coding styles, hence the difficulty in generalization and a lack of flexibility.\nNeed for comprehensive and fair comparisons: There is a lack of comprehensive and fair horizontal comparisons between various methods. Previous works have not been examined on new LLMs, and since there are now some new metrics like citation granularity, a comprehensive evaluation is needed. Works following ALCE (Sun et al., 2024 ###reference_b29###; Lee et al., 2024 ###reference_b18###; Slobodkin et al., 2024 ###reference_b27###) only compare their methods to ALCE baselines. Others (Asai et al., 2023 ###reference_b3###; Sun et al., 2023 ###reference_b30###) that focus more on answer accuracy are not evaluated on the citation benchmark, which means a lack of horizontal comparisons between SOTAs.\nGiven the problems above, a toolkit that unifies different methods is crucial for fast workflow implementation, fair comparisons between various methods, and efficient improvement and innovation. Therefore, we present Citekit , an open-source, extensible, and user-friendly modular toolkit to construct pipelines for citation generation.\nCitekit offers four different types of modules: Input , Generation Module , Enhancing Module , and Evaluator , which are combined in a pipeline. Input contains automatic components for loading data and making prompts, and is accessible by other modules. Generation Module is for response generation, where LLM follows the instructions and uses retrieved documents to generate an answer with citations or explicit reference to the documents for further process, featuring a wide range of LLM\u2019s supported and adaptive generation modes, it can satisfy different need for various generation task. Enhancing Module contains some components that can assess, retrieve, plan, and edit, and they can be customized for different tasks and even combined into clusters. Evaluator integrates different metrics to evaluate the output of the pipeline and other new metrics could also be defined and utilized. For training-based methods, a parallel data export component can output evaluation results for supervised learning and Reinforcement Learning (RL). Citekit also provides significant versatility in customizing new modules to quickly and conveniently realize an improved method. We provide 11 baseline recipes using Citekit to comprehensively and fairly compare these SOTA methods with the latest models Llama3 and GPT-4o, showing the strength of different modules and remaining challenges. Finally, we learn from the baselines to combine the most efficient planning module, reviser, and simplifier to build our method, PEEP. We achieve a balance in both answer accuracy and citation quality. Our contributions can be summarized as follows:\nWe propose a framework that modularizes citation tasks, with four main modules decomposing citation pipelines into request wrapping, generating, enhancing, and evaluating to unify different methods. The framework contains 14 components and 16 functions to define complicated interconnections between modules and satisfy different needs for citation generation tasks.\nWe design and complete Citekit , an easy-to-use and extensible toolkit based on our framework to help reproduce, compare, and combine different methods. 
We pre-defined 11 recipes to cover 11 citation generation paradigms, all derived from SOTA research.\nWe conduct a comprehensive evaluation and comparison of the existing 11 baselines on 2 SOTA LLMs and propose an improved new method, PEEP, by combining effective components. Our method achieves balance in answer accuracy and citation quality, showing the convenience of verifying new methods."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "System Design",
|
| 15 |
+
"text": "###figure_2### In this section, we will introduce the design of Citekit and detail distinct functionalities of different modules mentioned in \u00a71 ###reference_###, and how they are connected to each other to form an integrated working pipeline of citation generation. We show our design in Figure 2 ###reference_###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Input",
|
| 21 |
+
"text": "Input of a citation generation pipeline or a particular module contains requested and retrieved documents.\nDocuments: In RAG, the input contains some initial documents with knowledge relevant to the question, and the designated documents will be stored for further use and evaluation, each attached automatically with a unique index to trace.\nRequest: The request of a specific module is among three options: (1) from the user\u2019s query, such as questions. (2) from the module itself, like demonstrations and instructions for a task-specific LLM. (3) a dynamic data flow that changes in the process, such as the response from the upstream module."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Generation Module",
|
| 27 |
+
"text": "Generation Module contains a Large Language Model for generating content according to according to the requirements. This module, allows users to load a large language model or use some LLM API for generating responses. The input of Generation Module is a natural language query and the output is a response from LLM. A Generation Module supports different frameworks, including huggingface, vllm, fastchat, and APIs like openai API to implement the generation according to the need. To fit into different pipelines, Generation Module can be called either iterative or direct."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Enhancing Module",
|
| 33 |
+
"text": "Works that use one or more external modules to enhance the quality of citations can be classified into four categories: retriever, planner, assessor, and editor, as shown in Table 1 ###reference_###. They can be used individually or collaboratively, providing sufficient flexibility for the construction of a citation generation pipeline."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.3.1",
|
| 37 |
+
"parent_section_id": "2.3",
|
| 38 |
+
"section_name": "2.3.1 Real Time retriever",
|
| 39 |
+
"text": "A Real-Time Retriever, utilized to retrieve external documents from a corpus, is helpful when LLMs find it difficult to attribute from existing retrieved documents. The input of the retriever is a query and the output contains some retrieved documents or chunks. In the design of Citekit , documents or chunks returned will be automatically added into the pipeline for later evaluation, with a unique index if needed. The retriever can not only retrieve knowledge by relevance, like using bm-25 or dense passage retrieval (Karpukhin et al., 2020 ###reference_b17###) but also get documents in the data store by an index or samples documents from LLMs or even the Generation Module itself (inner), as used in Recitation Augmented baseline asking LLMs to recite documents from training data."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.3.2",
|
| 43 |
+
"parent_section_id": "2.3",
|
| 44 |
+
"section_name": "2.3.2 Ahead Planner",
|
| 45 |
+
"text": "Planners will process the query and documents in advance before it is sent to LLMs for generation. Taking the query and relevant documents as input, An ahead planner such as blueprint modules and attributers generate guidance that the Generation Module can follow to improve the citation quality. The generated information like plans or attributing routes serve to help Generation Module better understand and extract knowledge, while also making the answer more traceable."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.3.3",
|
| 49 |
+
"parent_section_id": "2.3",
|
| 50 |
+
"section_name": "2.3.3 Quality Feedbacker",
|
| 51 |
+
"text": "A Feedbacker, once defined and plugged into the pipeline, can automatically evaluate the initial answer in the process to guide the modules to generate a better response. The input contains the initial answer, and the output of a feedbacker may be a quantitative value (scorer), or the answer with the highest score by some pre-defined metrics (reranker). A special type of feedbacker, verifier, can also output the exact input but present a Boole value True or False for distinguished further process."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "2.3.4",
|
| 55 |
+
"parent_section_id": "2.3",
|
| 56 |
+
"section_name": "2.3.4 Output Editor",
|
| 57 |
+
"text": "An output editor can modify the response for a better citation or answer quality. The input of an editor contains an\nanswer and the output is a new answer edited using information from the data storage or the other feedback from the input. It can either revise the answer, like correct the factual problems in the answer, or modify the citation, including simplifying it to make it more precise (simplifier)."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "2.3.5",
|
| 61 |
+
"parent_section_id": "2.3",
|
| 62 |
+
"section_name": "2.3.5 Extensibility",
|
| 63 |
+
"text": "In addition to the predefined components in Enhancing Module , researchers can also create a new component by just setting a corresponding prompt template and a calling function that demonstrates the logic of execution. Any module that is inherited from the base class will have the ability to be connected to the pipeline and send output to the target module so the new module can be easily plugged into the pipeline."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "2.4",
|
| 67 |
+
"parent_section_id": "2",
|
| 68 |
+
"section_name": "Evaluator",
|
| 69 |
+
"text": "Evaluator is a module that evaluates or scores the output. If the Evaluator is plugged in, the output of the pipeline will be automatically sent to it, and other information such as reference answers will also be passed into it for evaluation. Evaluator have access to the initial corpus as well as the newly retrieved documents during the whole process. Finally, a result will be returned by the Evaluator .\nThere are some predefined metrics that can be set easily, such as ROUGE for answer quality and MAUVE for fluency, citation precision and recall in ALCE benchmark, a citation precision and recall metric with granularity for citation quality, and dataset-specific metrics for answer correctness (e.g. STR-EM for ASQA, claims for ELI5).\nManually defined other metrics are also possible once the evaluation function and the specific data in the pipeline for evaluation are defined, allowing users to implant an existing metric into a pipeline or a new one."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "2.5",
|
| 73 |
+
"parent_section_id": "2",
|
| 74 |
+
"section_name": "Pipeline",
|
| 75 |
+
"text": "A pipeline is a class for managing data flow and organizing different modules and corresponding components. It serves as a runner to start the citation generation process and control the input and output of modules contained in itself.\nFor data management, input (e.g. question to be answered, ground truth for evaluation) and documents retrieved are stored respectively as dictionaries with keys and values in the pipeline, and they are both accessible by modules.\nFor module organization, modules that are connected to the pipeline can take an input and generate an output, and the output will be sent to target a module by predefined conditions.\nCitekit offers more flexible options for more complicated pipelines. For instance, multiple responses can be sent to the next module in parallel for independent processes. They can also be sent iterative for sequential needs. Besides, modules that simply form a sequence can be connected in order and be used like a complete module.\nThe pipeline is also extensible, as users can plug in different modules. Different ways of connection will make the pipeline a sequence, a loop, a tree, or other structures."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Usage",
|
| 81 |
+
"text": ""
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "3.1",
|
| 85 |
+
"parent_section_id": "3",
|
| 86 |
+
"section_name": "Realization of SOTA method.",
|
| 87 |
+
"text": "To construct a citation generation pipeline with Citekit , users can use predefined recipes to define some preset modules and combine them. For Attribute First, then Generate pipeline, users can simply use a list to indicate the interconnection and the last module will output the answer.\nTo run the entire pipeline, the user should specify certain entries from the dataset as input, and designate document entries as the initially stored documents. As shown in Figure 3 ###reference_###, we use several lines of code to complete the method quickly."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "3.2",
|
| 91 |
+
"parent_section_id": "3",
|
| 92 |
+
"section_name": "Customized pipeline modification.",
|
| 93 |
+
"text": "The Attribute First, then Generate design uses only an ahead planner. If we want to extend the pipeline by plugging in a verifier to ask the Generation Module to regenerate with new retrieved documents if the statement is not entailed by the documents. We will add a loop after the initial output to Generation Module . Figure 4 ###reference_### shows an example of the efficiency of improving an existing method.\n###table_1###"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Evaluation",
|
| 99 |
+
"text": ""
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.1",
|
| 103 |
+
"parent_section_id": "4",
|
| 104 |
+
"section_name": "Baselines and metrics",
|
| 105 |
+
"text": "We evaluate 11 baselines in total using the state-of-the-art open-source and closed-source LLMs, GPT-4o (OpenAI, 2024 ###reference_b25###) and Llama3-8B-Instruct (AI@Meta, 2024 ###reference_b2###) on ASQA dataset. ALCE Vanilla, Snippet, and Summ directly prompt the LLM to generate citations using full documents, snippets, and summaries respectively. ALCE Interact (Gao et al., 2023b ###reference_b7###) uses document summaries and interactively provides full documents. AAR Lee et al. (2024 ###reference_b18###) asked the LLM to revise the answer, while VTG Sun et al. (2024 ###reference_b29###) will verify the answer and retrieve more supplementary documents for regeneration. Citation Enhanced (Li et al., 2024b ###reference_b23###) method retrieves documents after generation, and Recitation Augmented (Sun et al., 2023 ###reference_b30###) sample documents from pre-training data. Attribute First, then Generate (Slobodkin et al., 2024 ###reference_b27###) and Blueprint Fierro et al. (2024 ###reference_b5###) provides some attributing spans or questions to guide the generation. For self-RAG (Asai et al., 2023 ###reference_b3###), we use our prompt version instead of a trained model to retrieve documents and generate sentence-by-sentence.\nWe use metrics from ALCE for evaluation, including fluency, correctness, rouge, citation recall, and precision. We also evaluate the appropriate citation rate and the citation granularity."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.2",
|
| 109 |
+
"parent_section_id": "4",
|
| 110 |
+
"section_name": "Our method",
|
| 111 |
+
"text": "We build our new pipeline, PEEP, combining the most efficient modules. We use a Planner to decompose the question in two as much as atomic questions (Yan et al., 2024 ###reference_b34###), and two Editors (reviser and simplifier) to combine the answers for each atomic question and simplify the citation afterward. We use Parallel generation to get answers with sufficient diversity, as shown in Figure 5 ###reference_###.\n###figure_3###"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.3",
|
| 115 |
+
"parent_section_id": "4",
|
| 116 |
+
"section_name": "Settings",
|
| 117 |
+
"text": "We set max generated tokens to 500 to avoid too long answers and use \\n as stop token. For Llama3-8B-Instruct, we use the model from huggingface and set the temperature to 0.5. and other configurations by default. For GPT-4o, we use the openai API. During our experiment, we used the same prompt for the two models.\nFor retrieving documents relevant to the query, we use 5 documents by default. However, for ALCE Summ, ALCE Snippet, and ALCE Interact, we use 10 documents as they show the short summaries and snippets from the documents. Citation Augmented and self-RAG use real-time retrievers instead of a fixed number of document inputs, and we configured our retrievers to return the top-1 document at a time.\nFor evaluation of citation quality, we adopt a TRUE model ###reference_nli_mixture### (Honovich et al., 2022 ###reference_b10###) to verify if the cited documents could entail the generated statement."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.4",
|
| 121 |
+
"parent_section_id": "4",
|
| 122 |
+
"section_name": "Results and analysis",
|
| 123 |
+
"text": "We show the full results on ASQA dataset in Figure 2 ###reference_###. We discuss the main results from the experiments below.\nA more advanced model performs better. GPT-4o is generally better than Llama3-8B-Instruct on both citation quality and answer correctness. The performance of citation generation can benefit from the enhanced capabilities of the base model.\nAhead planner can enhance the answer. A planner can enhance the answer on correctness, especially for a more powerful model. GPT-4o is more likely to achieve a better performance via planning.\nOutput editor can significantly improve the citation quality.\nAn output editor can improve citation recall and precision, as well as the correctness of citation prediction. While the ALCE Vanilla presents a citation precision and recall at about 50, a reranker in self-RAG can make the precision and recall achieve 80.\nEnhancing the granularity of citations is still a challenge.\nNearly all the baselines presented cite the full documents, resulting in a relatively low granularity. Citation generation based on summary and extraction(ALCE Summand ALCE Snippet) can cite only a snippet or a summary from the document, but it risks a loss of correctness.\nLLMs can cite internal knowledge better\nDespite the significant loss of answer quality and correctness, Llama3-8B-Instruct demonstrates better citation quality both on recall and precision when citing the documents sampled from itself, compared to the ALCE-Vanilla baseline that uses external knowledge, demonstrating the prospects of research in this area.\nConsidering both citation quality and answer correctness remains a challenge\nOur method significantly improves the overall quality of citations while only slightly sacrificing accuracy. It achieves the best balance among all the methods tested. However, the answer accuracy is still lower than the highest-performing method by 7.3 points. Additionally, the citation recall and precision barely exceed 80%. In practical applications, it is still difficult to gain readers\u2019 trust. We believe that thanks to the modularity and extensibility of Citekit , this issue will gradually be resolved."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "5",
|
| 127 |
+
"parent_section_id": null,
|
| 128 |
+
"section_name": "Conclusion",
|
| 129 |
+
"text": "To unify different methods for LLM citation generation and to conduct a comprehensive and fair comparison of existing methods, we propose Citekit a user-friendly, modular, and extensible toolkit. We also present an instance to demonstrate the application of the toolkit, showing the usability and versatility of realizing citation generation pipelines. We conducted experiments on 11 different baselines and found that different modules excel in improving either the answer accuracy or the citation quality, and our approach achieves the best balance between answer accuracy and citation quality. However, generating a citation in fine-grained granularity is still challenging."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "6",
|
| 133 |
+
"parent_section_id": null,
|
| 134 |
+
"section_name": "Limitation",
|
| 135 |
+
"text": "There are still areas for improvement in our evaluation. (1) We only conduct our experiment on two LLMs, GPT-4o and Llama3-8B-Instruct. For other models, especially smaller ones, whether the different methods would still be effective in improving the performance of citation tasks is unknown. (2) Existing datasets are not designed for citation tasks. For instance, they do not take into account appropriate citation generation based on needs. Building datasets that reflect real citation generation scenarios remains an open problem."
|
| 136 |
+
}
|
| 137 |
+
],
|
| 138 |
+
"appendix": [
|
| 139 |
+
{
|
| 140 |
+
"section_id": "Appendix 1",
|
| 141 |
+
"parent_section_id": null,
|
| 142 |
+
"section_name": "Appendix A Implementation Details",
|
| 143 |
+
"text": "In this section, we describe the implementation details for different baselines. For other baselines, we follow the original prompts and the structure they provided, but for Blueprint and self-RAG, we use In-Context-Learning (ICL) instead of a trained model to complete the sub-task in their design.\nFor the Blueprint Model, we use the abstractive model to produce general-purpose questions: the paragraph is the input and the question is the output. We use prompts to make LLMs generate questions. ALCE provides question-answer pairs for ASQA dataset, and in each pair the sub-question shows an aspect of answering the final question. We use these pairs to complete a 2-shot prompt for ICL. For answer generation, we adjust the ALCE prompt to make LLMs answer all the subquestions. We show our prompts in Figure 10 ###reference_###, 11 ###reference_###\nAs for Llama3-8B and GPT-4o, there is no trained version for self-RAG, we use prompt to make the LLM retrieve documents and generate, then use an NLI model to evaluate if the document is supportive and the answer is useful, respectively in 3 segments. A reranker will find the best segment and the sentence is add to the answer. Similar to Attribute First, then Generate, We use generated sentences as prefix to complement the sentence-by-sentence iterative generation. We show our prompts in Figure 12 ###reference_###, 13 ###reference_###"
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"section_id": "Appendix 2",
|
| 147 |
+
"parent_section_id": null,
|
| 148 |
+
"section_name": "Appendix B Case Study",
|
| 149 |
+
"text": "We will describe how the answer quality could be improved by using a planner. In ASQA, the question is to some extent ambiguous and requires multiple short answers to cover different aspects. As in Figure 6 ###reference_###, a planner can give guidance by a blueprint or properly highlighting and clustering useful spans.\n[innerrightmargin=20pt]\nQuestion: Who was theprogrammer of MS-DOSoperating system?ALCE-Vanilla:Answer:Tim Paterson was the programmer of MS-DOSoperating system [1]. He developed 86-DOS,which later formed the basis of MS-DOS [1].MS-DOS was a renamed form of 86-DOS, writtenby Tim Paterson [3].Attribute First, then Generate:Attributer:1.Document[1]:...MS-DOS was a renamed form of 86-DOS ownedby Seattle Computer Products, written by TimPaterson...2.Document[1]:Paterson was educated in ...Document[4]:Microsoft purchased 86-DOS, allegedly for$50,000 ...Answer:Tim Paterson is the programmer who developedthe MS-DOS operating system, which wasinitially named 86-DOS [1]. Tim Paterson, thedeveloper of 86-DOS, is credited with creatingthe operating system, which was later renamedMS-DOS, and was purchased by Microsoft for$50,000 [1][4].Attribute First, then Generate:questions:Who was the programmer of 86-dos operatingsystem? Which company was the programmer ofMS-DOS operating system? ...Answer:Tim Paterson created 86-DOS, which laterformed the basis of MS-DOS [1]. MS-DOS is anoperating system for x86-based personal compu-ters mostly developed by Microsoft[4].\nAlthough we ask the model to cite a minimum set of documents, LLMs still tend to overcite. Since the ASQA dataset contains rare multi-hop questions, most of the statements only need one document as a citation. An editor, such as a simplifier can remove redundant citations. Figure 7 ###reference_### shows the answer before and after simplification in VTG.\n[innerrightmargin=20pt]\nQuestion: When was the first Apple i phonemade?Documents: Document[1]: ...was later released in the United States on June 29, 2007...Document[2]: ...the first iPhone would be released later that year. On June 29, 2007...VTG before removal:Answer:The first Apple iPhone was released on June29, 2007 [1][2].VTG after removal:Answer:The first Apple iPhone was released on June29, 2007 [2].\nTo improve granularity, the answer should cite the minimum number of spans from the documents. Most of the methods use document-level citation, and in our metrics of granularity, we assume all the spans in one document are cited for document-level citation. In ALCE-Snippet, LLM only cites a snippet from the document, hence a high score of granularity. Figure 8 ###reference_### shows how ALCE Vanillaworks to cite a span, not a document.\n[innerrightmargin=20pt]\nDocuments: ...Document [3]: (Title:The Sound of Silence)...Notes Bibliography The Sound of Silence \"TheSound of Silence\", originally \"The Sounds of Silence\", is a song by the American music duo Simon & Garfunkel. The song was written by......Provided Snippets: ...Document [3]: (Title:The Sound of Silence)\"TheSound of Silence\", originally \"The Sounds of Silence\", is a song by the American music duoSimon & Garfunkel....Answer:... The song was originally titled \"The Sounds of Silence\" [3]. ...\nIn figure 9 ###reference_###, we show an example of all the answers generated by different methods.\n###table_2### In this task, you should write no more than four subquestions according to the given documents and a question. 
Ensure that each subquestion can be respond by reading the documents, and is related to the question. Write then in only one paragraph. Question: Who is the original artist of sound of silence? Document [1]: Sounds of Silence is the second studio album by Simon & Garfunkel, released on January 17, 1966. The album\u2019s title is a slight modification of the title of the duo\u2019s first major hit, \"The Sound of Silence\", which originally was released as \"The Sounds of Silence\". The song had earlier been released in an acoustic version on the album \"Wednesday Morning, 3 A.M.\", and later on the soundtrack to the movie \"The Graduate\". Document [2]: Sound of Silence\" is a song performed by Australian recording artist Dami Im. Written by Anthony Egizii and David Musumeci of DNA Songs, it is best known as Australia\u2019s entry at the Eurovision Song Contest 2016 which was held in Stockholm, Sweden, where it finished 2nd, receiving a total of 511 points. Document [3]: Simon & Garfunkel Simon & Garfunkel were an American folk rock duo consisting of singer-songwriter Paul Simon and singer Art Garfunkel. They were one of the bestselling music groups of the 1960s and became counterculture icons of the decade\u2019s social revolution, alongside artists such as the Beatles, the Beach Boys, and Bob Dylan. Their biggest hits\\u2014including \"The Sound of Silence\" (1964), \"Mrs. Robinson\" (1968) Sub-questions: Who is the original artist of sound of silence, the album? Who is the original artist of sound of silence, the song, released in 2016? Who is the original artist of sound of silence, the song, released in 1964?\" In this task, you should write no more than four subquestions according to the given documents and a question. Ensure that each subquestion can be respond by reading the documents, and is related to the question. Write then in only one paragraph. Question: ... Document[1]: ... Document[2]: ... Document[3]: ... Sub-questions:\nInstruction: Write an accurate, engaging, and concise answer for the given question using only the provided search results (some of which might be irrelevant) and cite them properly by answering all the subquestions. Each subquestion should be answered. Use an unbiased and journalistic tone. Always cite for any factual claim. When citing several search results, use [1][2][3]. Cite at least one document and at most three documents in each sentence. If multiple documents support the sentence, only cite a minimum sufficient subset of the documents.\nIn this task, you will be given a question, and you should generate a query to find relevent documents to help generating the answer. You may be given some sentences that have been generated as context, you should try to find documents that could support another claim other than sentences generated but still relevent to the question. Given the original question: Who has the highest goals in world football? Please generate one query to help find relevent documents, the query is: Answer: \"Top goal scorer in world football 2021\"\nInstruction: Write only a sentence as an accurate, engaging, and concise answer for the given question using only the provided search result. Use an unbiased and journalistic tone. Question:Who has the highest goals in world football? Prefix:Pel\u00e9 holds the record for the highest number of goals in world football, with 1281 goals recognized by FIFA[1]. Document [4](Title:Wartan Ghazarian)goals (4 in World Cup qualifiers, 3 in Asian Cup qualifiers, 12 in friendlies). 
His record was later broken by Roda Antar, after Roda scored his 20th goal in 2018 FIFA World Cup qualification match against Laos. On 16 November 2008, during Round 6 of the Lebanese Football League, at the age of 39 years, Vartan scored his 130th goal in the Lebanese first division against Tadamon Tyre, becoming officially the highest all-time scorer in the history of Lebanese football. Some officials do not recognize the 12 goals he scored in the 2000\u20132001 season which was canceled. However, his remaining Answer:\nInstruction: You will be presented with a snippet from documents. Write only a sentence as an accurate, engaging, and concise answer for the given question using only the provided snippets. Use an unbiased and journalistic tone. Question:Who has the highest goals in world football? Prefix:Pel\u00e9 holds the record for the highest number of goals in world football, with 1281 goals recognized by FIFA[1]. Document [4](Title:Wartan Ghazarian)His record was later broken by Roda Antar, after Roda scored his 20th goal in 2018 FIFA World Cup qualification match against Laos. Answer:"
|
| 150 |
+
}
|
| 151 |
+
],
|
| 152 |
+
"tables": {
|
| 153 |
+
"1": {
|
| 154 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.3\" style=\"width:433.6pt;height:419.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(66.3pt,-64.2pt) scale(1.44006419897629,1.44006419897629) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.3.3\">\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.4\">\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S2.T1.3.3.4.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S2.T1.3.3.4.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.3.4.2.1\">Feedbacker</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S2.T1.3.3.4.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.3.4.3.1\">Retriever</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S2.T1.3.3.4.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.3.4.4.1\">Planner</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S2.T1.3.3.4.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.3.4.5.1\">Editor</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.3.3.5.1.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.3.3.5.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.3.3.5.1.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.3.3.5.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.3.3.5.1.2.1.1.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">ALCE</span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.3.3.5.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.3.3.5.1.2.1.2.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Vanilla</span></span>\n</span></span><span class=\"ltx_text\" id=\"S2.T1.3.3.5.1.3\"></span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.6.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.3.3.6.1.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.3.3.6.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.3.3.6.1.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.3.3.6.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.3.3.6.1.2.1.1.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">ALCE</span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.3.3.6.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.3.3.6.1.2.1.2.1\" 
style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Rerank</span></span>\n</span></span><span class=\"ltx_text\" id=\"S2.T1.3.3.6.1.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.6.2\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.6.2.1\" style=\"background-color:#D4D4FF;\">reranker</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.6.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.6.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.6.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.1.1.1.1.2\"></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.1.1.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.1.1.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.1.1.1.1.1.1.2.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">ALCE</span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.1.1.1.1.1.1.1.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Interact<sup class=\"ltx_sup\" id=\"S2.T1.1.1.1.1.1.1.1.1.1\">\u2217</sup></span></span>\n</span></span><span class=\"ltx_text\" id=\"S2.T1.1.1.1.1.3\"></span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.3\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.1.1.1.3.1\" style=\"background-color:#D4D4FF;\">summary</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">AnG<sup class=\"ltx_sup\" id=\"S2.T1.2.2.2.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.4\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.2.2.2.4.1\" style=\"background-color:#D4D4FF;\">attributer</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.7.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Bluprint</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.7.2\" 
style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.7.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.7.4\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.7.4.1\" style=\"background-color:#D4D4FF;\">blueprint</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.7.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.8.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">AAR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.8.2\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.8.2.1\" style=\"background-color:#D4D4FF;\">scorer</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.8.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.8.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.8.5\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.8.5.1\" style=\"background-color:#D4D4FF;\">reviser</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.9.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.3.3.9.1.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.3.3.9.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.3.3.9.1.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.3.3.9.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.3.3.9.1.2.1.1.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Citation</span></span>\n<span class=\"ltx_tr\" id=\"S2.T1.3.3.9.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.3.3.9.1.2.1.2.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Augmented</span></span>\n</span></span><span class=\"ltx_text\" id=\"S2.T1.3.3.9.1.3\"></span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.9.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.9.3\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.9.3.1\" style=\"background-color:#D4D4FF;\">relevance</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.9.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.9.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.10.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">VTG</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.10.2\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.10.2.1\" style=\"background-color:#D4D4FF;\">verifier</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" 
id=\"S2.T1.3.3.10.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.10.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.10.5\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.10.5.1\" style=\"background-color:#D4D4FF;\">simplifier</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.11.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">recitation</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.11.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.11.3\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.11.3.1\" style=\"background-color:#D4D4FF;\">inner</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.11.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.11.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">self-RAG<sup class=\"ltx_sup\" id=\"S2.T1.3.3.3.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.2\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.3.2.1\" style=\"background-color:#D4D4FF;\">reranker</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.3\" style=\"background-color:#D4D4FF;padding-top:0.5pt;padding-bottom:0.5pt;\"><span class=\"ltx_text\" id=\"S2.T1.3.3.3.3.1\" style=\"background-color:#D4D4FF;\">relevance</span></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.3.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>\nThe usage of different modules and ways of generation in our baselines. <sup class=\"ltx_sup\" id=\"S2.T1.8.1\">\u2217</sup> Methods marked with an asterisk use iterative <span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S2.T1.9.2\">Generation Module</span> while others use direct.\n</figcaption>\n</figure>",
|
| 155 |
+
"capture": "Table 1: \nThe usage of different modules and ways of generation in our baselines. \u2217 Methods marked with an asterisk use iterative Generation Module while others use direct.\n"
|
| 156 |
+
},
|
| 157 |
+
"2": {
|
| 158 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.1\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2\">\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S3.T2.1.2.1\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T2.1.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.2.3.1\">Fluency</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.2.4.1\">Correct.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S3.T2.1.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.2.5.1\">Citation</span></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T2.1.2.6\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T2.1.2.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.3\">\n<td class=\"ltx_td ltx_border_r\" id=\"S3.T2.1.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.2\">Model</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.3.3\">(MAUVE)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.3.4\">(EM Rec.)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.3.5\">Rec.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.3.6\">Prec.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.3.7\">App.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.3.8\">Gran.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.9\">ROUGE-L</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.10\">Length</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.4.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S3.T2.1.4.1.1.1\">\n<span class=\"ltx_p\" id=\"S3.T2.1.4.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.4.1.1.1.1.1\">ALCE</span></span>\n<span class=\"ltx_p\" id=\"S3.T2.1.4.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.4.1.1.1.2.1\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.4.1.1.1.2.2\">Vanilla</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.3\">66.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.4\">40.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.5\">47.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.6\">53.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.7\">80.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.8\">22.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.9\">28.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.10\">72.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.2\">72.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.3\">41.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.4\">59.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.5\">61.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.6\">70.8</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.7\">19.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.8\">32.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.5.9\">41.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.6.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S3.T2.1.6.1.1.1\">\n<span class=\"ltx_p\" id=\"S3.T2.1.6.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.6.1.1.1.1.1\">ALCE</span></span>\n<span class=\"ltx_p\" id=\"S3.T2.1.6.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.6.1.1.1.2.1\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.6.1.1.1.2.2\">Summ</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.3\">80.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.4\">40.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.5\">59.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.6\">66.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.7\">80.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.8\">59.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.9\">27.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.6.10\">69.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.2\">72.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.3\">42.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.4\">59.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.5\">61.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.6\">82.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.7\">54.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.8\">32.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.7.9\">41.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.8.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S3.T2.1.8.1.1.1\">\n<span class=\"ltx_p\" id=\"S3.T2.1.8.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.8.1.1.1.1.1\">ALCE</span></span>\n<span class=\"ltx_p\" id=\"S3.T2.1.8.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.8.1.1.1.2.1\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.8.1.1.1.2.2\">Snippet</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.3\">69.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.4\">38.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.5\">56.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.6\">60.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.7\">81.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.8\">65.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.8.9\">27.1</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S3.T2.1.8.10\">65.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.2\">79.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.3\">37.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.4\">77.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.5\">66.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.6\">85.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.7\">58.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.8\">30.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.9.9\">26.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.10.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.10.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S3.T2.1.10.1.1.1\">\n<span class=\"ltx_p\" id=\"S3.T2.1.10.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.10.1.1.1.1.1\">ALCE</span></span>\n<span class=\"ltx_p\" id=\"S3.T2.1.10.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.10.1.1.1.2.1\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T2.1.10.1.1.1.2.2\">Interact</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.3\">68.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.4\">30.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.5\">30.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.6\">56.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.7\">84.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.8\">17.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.9\">21.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.10.10\">56.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.2\">72.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.3\">39.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.4\">41.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.5\">45.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.6\">72.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.7\">12.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.8\">30.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.11.9\">67.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.12.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.12.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S3.T2.1.12.1.1.1\">\n<span class=\"ltx_p\" id=\"S3.T2.1.12.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.12.1.1.1.1.1\">Attribute,</span></span>\n<span class=\"ltx_p\" id=\"S3.T2.1.12.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.12.1.1.1.2.1\">then Generate</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.3\">70.2</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S3.T2.1.12.4\">38.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.5\">49.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.6\">42.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.7\">78.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.8\">22.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.9\">27.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.12.10\">89.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.2\">75.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.3\">41.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.4\">63.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.5\">42.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.6\">87.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.7\">19.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.8\">24.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.13.9\">61.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.14.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.14.1.1\">AAR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.3\">69.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.4\">38.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.5\">37.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.6\">47.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.7\">74.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.8\">28.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.9\">27.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.14.10\">122.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.2\">72.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.3\">46.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.4\">52.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.5\">58.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.6\">77.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.7\">20.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.8\">31.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.15.9\">59.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.16.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.16.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S3.T2.1.16.1.1.1\">\n<span class=\"ltx_p\" id=\"S3.T2.1.16.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.16.1.1.1.1.1\">Citation</span></span>\n<span class=\"ltx_p\" id=\"S3.T2.1.16.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.16.1.1.1.2.1\">Enhanced</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T2.1.16.3\">59.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.4\">31.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.5\">30.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.6\">40.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.7\">54.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.8\">27.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.9\">24.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.16.10\">48.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.17\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.2\">65.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.3\">41.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.4\">49.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.5\">52.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.6\">55.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.7\">27.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.8\">29.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.17.9\">40.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.18.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.18.1.1\">VTG</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.3\">74.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.4\">41.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.5\">73.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.6\">73.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.7\">87.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.8\">27.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.9\">42.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.18.10\">45.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.19\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.2\">75.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.3\">42.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.4\">83.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.5\">82.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.6\">88.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.7\">29.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.8\">39.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.19.9\">45.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.20.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.20.1.1\">Blueprint</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.3\">70.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.4\">40.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.5\">68.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T2.1.20.6\">71.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.7\">87.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.8\">22.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.9\">31.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.20.10\">75.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.21\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.2\">78.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.3\">41.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.4\">68.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.5\">83.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.6\">83.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.7\">19.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.8\">27.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.21.9\">75.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.22.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.22.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S3.T2.1.22.1.1.1\">\n<span class=\"ltx_p\" id=\"S3.T2.1.22.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.22.1.1.1.1.1\">Recitation</span></span>\n<span class=\"ltx_p\" id=\"S3.T2.1.22.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.22.1.1.1.2.1\">Augmented</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.3\">61.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.4\">33.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.5\">47.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.6\">55.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.7\">62.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.8\">14.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.9\">34.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.22.10\">129</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.1\">GPT-4o<sup class=\"ltx_sup\" id=\"S3.T2.1.1.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.2\">/</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.3\">/</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4\">/</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5\">/</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6\">/</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7\">/</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8\">/</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9\">/</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.23\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.23.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.23.1.1\">Self-RAG</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.3\">68.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.4\">35.7</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S3.T2.1.23.5\">82.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.6\">80.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.7\">88.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.8\">28.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.9\">27.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.23.10\">52.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.24\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.2\">70.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.3\">37.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.4\">81.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.5\">83.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.6\">84.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.7\">26.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.8\">27.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.24.9\">40.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.25\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T2.1.25.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.25.1.1\">PEEP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.2\">llama3-8B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.3\">68.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.4\">38.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.5\">78.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.6\">79.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.7\">82.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.8\">66.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.9\">24.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.25.10\">59.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.26\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.1\">GPT-4o</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.2\">75.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.3\">41.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.4\">83.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.5\">85.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.6\">86.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.7\">65.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.8\">28.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.1.26.9\">66.7</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>\nASQA results. <sup class=\"ltx_sup\" id=\"S3.T2.5.1\">\u2217</sup>In recitation-augmented baseline, we only use Llama3-8B-Instruct because we found GPT-4o is too reluctant to recite verbatim documents in training data.</figcaption>\n</figure>",
|
| 159 |
+
"capture": "Table 2: \nASQA results. \u2217In recitation-augmented baseline, we only use Llama3-8B-Instruct because we found GPT-4o is too reluctant to recite verbatim documents in training data."
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
"image_paths": {
|
| 163 |
+
"1": {
|
| 164 |
+
"figure_path": "2408.04662v2_figure_1.png",
|
| 165 |
+
"caption": "Figure 1: Illustration of Citation Task. An answer without citation makes readers confused about the actual timeline, but if citations are included, they can understand how the details in the answer actually make sense.",
|
| 166 |
+
"url": "http://arxiv.org/html/2408.04662v2/x1.png"
|
| 167 |
+
},
|
| 168 |
+
"2": {
|
| 169 |
+
"figure_path": "2408.04662v2_figure_2.png",
|
| 170 |
+
"caption": "Figure 2: The modular design of Citekit . On the left, we show four main modules in Citekit and how they interact with other modules, as well as some predefined components and their abilities; on the right, we illustrate three baseline implementations in our framework and show the data flow during the running of their pipelines",
|
| 171 |
+
"url": "http://arxiv.org/html/2408.04662v2/x2.png"
|
| 172 |
+
},
|
| 173 |
+
"5": {
|
| 174 |
+
"figure_path": "2408.04662v2_figure_5.png",
|
| 175 |
+
"caption": "Figure 5: Design of PEEP. We show an example of generating a comprehensive answer for an ASQA question.",
|
| 176 |
+
"url": "http://arxiv.org/html/2408.04662v2/x3.png"
|
| 177 |
+
}
|
| 178 |
+
},
|
| 179 |
+
"validation": true,
|
| 180 |
+
"references": [
|
| 181 |
+
{
|
| 182 |
+
"1": {
|
| 183 |
+
"title": "Evaluation of attribution bias in retrieval-augmented large language models.",
|
| 184 |
+
"author": "Amin Abolghasemi, Leif Azzopardi, Seyyed Hadi Hashemi, Maarten de Rijke, and Suzan Verberne. 2024.",
|
| 185 |
+
"venue": "Preprint, arXiv:2410.12380.",
|
| 186 |
+
"url": "https://arxiv.org/abs/2410.12380"
|
| 187 |
+
}
|
| 188 |
+
},
|
| 189 |
+
{
|
| 190 |
+
"2": {
|
| 191 |
+
"title": "Llama 3 model card.",
|
| 192 |
+
"author": "AI@Meta. 2024.",
|
| 193 |
+
"venue": null,
|
| 194 |
+
"url": "https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md"
|
| 195 |
+
}
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"3": {
|
| 199 |
+
"title": "Self-rag: Learning to retrieve, generate, and critique through self-reflection.",
|
| 200 |
+
"author": "Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023.",
|
| 201 |
+
"venue": "Preprint, arXiv:2310.11511.",
|
| 202 |
+
"url": "https://arxiv.org/abs/2310.11511"
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"4": {
|
| 207 |
+
"title": "Localizing factual inconsistencies in attributable text generation.",
|
| 208 |
+
"author": "Arie Cattan, Paul Roit, Shiyue Zhang, David Wan, Roee Aharoni, Idan Szpektor, Mohit Bansal, and Ido Dagan. 2024.",
|
| 209 |
+
"venue": "Preprint, arXiv:2410.07473.",
|
| 210 |
+
"url": "https://arxiv.org/abs/2410.07473"
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"5": {
|
| 215 |
+
"title": "Learning to plan and generate text with citations.",
|
| 216 |
+
"author": "Constanza Fierro, Reinald Kim Amplayo, Fantine Huot, Nicola De Cao, Joshua Maynez, Shashi Narayan, and Mirella Lapata. 2024.",
|
| 217 |
+
"venue": null,
|
| 218 |
+
"url": "https://openreview.net/forum?id=6NEJ0ReNzr"
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"6": {
|
| 223 |
+
"title": "RARR: Researching and revising what language models say, using language models.",
|
| 224 |
+
"author": "Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a.",
|
| 225 |
+
"venue": "In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477\u201316508, Toronto, Canada. Association for Computational Linguistics.",
|
| 226 |
+
"url": "https://doi.org/10.18653/v1/2023.acl-long.910"
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"7": {
|
| 231 |
+
"title": "Enabling large language models to generate text with citations.",
|
| 232 |
+
"author": "Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b.",
|
| 233 |
+
"venue": "Preprint, arXiv:2305.14627.",
|
| 234 |
+
"url": "https://arxiv.org/abs/2305.14627"
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"8": {
|
| 239 |
+
"title": "Retrieval-augmented generation for large language models: A survey.",
|
| 240 |
+
"author": "Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. 2024.",
|
| 241 |
+
"venue": "Preprint, arXiv:2312.10997.",
|
| 242 |
+
"url": "https://arxiv.org/abs/2312.10997"
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"9": {
|
| 247 |
+
"title": "Constructing a multi-hop QA dataset for comprehensive evaluation of reasoning steps.",
|
| 248 |
+
"author": "Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020.",
|
| 249 |
+
"venue": "In Proceedings of the 28th International Conference on Computational Linguistics, pages 6609\u20136625, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
|
| 250 |
+
"url": "https://doi.org/10.18653/v1/2020.coling-main.580"
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"10": {
|
| 255 |
+
"title": "TRUE: Re-evaluating factual consistency evaluation.",
|
| 256 |
+
"author": "Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022.",
|
| 257 |
+
"venue": "In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3905\u20133920, Seattle, United States. Association for Computational Linguistics.",
|
| 258 |
+
"url": "https://doi.org/10.18653/v1/2022.naacl-main.287"
|
| 259 |
+
}
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"11": {
|
| 263 |
+
"title": "Training language models to generate text with citations via fine-grained rewards.",
|
| 264 |
+
"author": "Chengyu Huang, Zeqiu Wu, Yushi Hu, and Wenya Wang. 2024a.",
|
| 265 |
+
"venue": "Preprint, arXiv:2402.04315.",
|
| 266 |
+
"url": "https://arxiv.org/abs/2402.04315"
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"12": {
|
| 271 |
+
"title": "Learning fine-grained grounded citations for attributed large language models.",
|
| 272 |
+
"author": "Lei Huang, Xiaocheng Feng, Weitao Ma, Yuxuan Gu, Weihong Zhong, Xiachong Feng, Weijiang Yu, Weihua Peng, Duyu Tang, Dandan Tu, and Bing Qin. 2024b.",
|
| 273 |
+
"venue": "Preprint, arXiv:2408.04568.",
|
| 274 |
+
"url": "https://arxiv.org/abs/2408.04568"
|
| 275 |
+
}
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"13": {
|
| 279 |
+
"title": "Advancing large language model attribution through self-improving.",
|
| 280 |
+
"author": "Lei Huang, Xiaocheng Feng, Weitao Ma, Liang Zhao, Yuchun Fan, Weihong Zhong, Dongliang Xu, Qing Yang, Hongtao Liu, and Bing Qin. 2024c.",
|
| 281 |
+
"venue": "Preprint, arXiv:2410.13298.",
|
| 282 |
+
"url": "https://arxiv.org/abs/2410.13298"
|
| 283 |
+
}
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"14": {
|
| 287 |
+
"title": "A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions.",
|
| 288 |
+
"author": "Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023.",
|
| 289 |
+
"venue": "Preprint, arXiv:2311.05232.",
|
| 290 |
+
"url": "https://arxiv.org/abs/2311.05232"
|
| 291 |
+
}
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"15": {
|
| 295 |
+
"title": "1-pager: One pass answer generation and evidence retrieval.",
|
| 296 |
+
"author": "Palak Jain, Livio Baldini Soares, and Tom Kwiatkowski. 2023.",
|
| 297 |
+
"venue": "Preprint, arXiv:2310.16568.",
|
| 298 |
+
"url": "https://arxiv.org/abs/2310.16568"
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"16": {
|
| 303 |
+
"title": "Evaluating open-domain question answering in the era of large language models.",
|
| 304 |
+
"author": "Ehsan Kamalloo, Nouha Dziri, Charles L. A. Clarke, and Davood Rafiei. 2023.",
|
| 305 |
+
"venue": "Preprint, arXiv:2305.06984.",
|
| 306 |
+
"url": "https://arxiv.org/abs/2305.06984"
|
| 307 |
+
}
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"17": {
|
| 311 |
+
"title": "Dense passage retrieval for open-domain question answering.",
|
| 312 |
+
"author": "Vladimir Karpukhin, Barlas O\u011fuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen tau Yih. 2020.",
|
| 313 |
+
"venue": "Preprint, arXiv:2004.04906.",
|
| 314 |
+
"url": "https://arxiv.org/abs/2004.04906"
|
| 315 |
+
}
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"18": {
|
| 319 |
+
"title": "Ask, assess, and refine: Rectifying factual consistency and hallucination in LLMs with metric-guided feedback learning.",
|
| 320 |
+
"author": "Dongyub Lee, Eunhwan Park, Hodong Lee, and Heuiseok Lim. 2024.",
|
| 321 |
+
"venue": "In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2422\u20132433, St. Julian\u2019s, Malta. Association for Computational Linguistics.",
|
| 322 |
+
"url": "https://aclanthology.org/2024.eacl-long.149"
|
| 323 |
+
}
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"19": {
|
| 327 |
+
"title": "Towards reliable and fluent large language models: Incorporating feedback learning loops in qa systems.",
|
| 328 |
+
"author": "Dongyub Lee, Taesun Whang, Chanhee Lee, and Heuiseok Lim. 2023.",
|
| 329 |
+
"venue": "Preprint, arXiv:2309.06384.",
|
| 330 |
+
"url": "https://arxiv.org/abs/2309.06384"
|
| 331 |
+
}
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"20": {
|
| 335 |
+
"title": "Retrieval-augmented generation for knowledge-intensive nlp tasks.",
|
| 336 |
+
"author": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2021.",
|
| 337 |
+
"venue": "Preprint, arXiv:2005.11401.",
|
| 338 |
+
"url": "https://arxiv.org/abs/2005.11401"
|
| 339 |
+
}
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"21": {
|
| 343 |
+
"title": "Improving attributed text generation of large language models via preference learning.",
|
| 344 |
+
"author": "Dongfang Li, Zetian Sun, Baotian Hu, Zhenyu Liu, Xinshuo Hu, Xuebo Liu, and Min Zhang. 2024a.",
|
| 345 |
+
"venue": "Preprint, arXiv:2403.18381.",
|
| 346 |
+
"url": "https://arxiv.org/abs/2403.18381"
|
| 347 |
+
}
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"22": {
|
| 351 |
+
"title": "A survey of large language models attribution.",
|
| 352 |
+
"author": "Dongfang Li, Zetian Sun, Xinshuo Hu, Zhenyu Liu, Ziyang Chen, Baotian Hu, Aiguo Wu, and Min Zhang. 2023.",
|
| 353 |
+
"venue": "Preprint, arXiv:2311.03731.",
|
| 354 |
+
"url": "https://arxiv.org/abs/2311.03731"
|
| 355 |
+
}
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"23": {
|
| 359 |
+
"title": "Citation-enhanced generation for llm-based chatbots.",
|
| 360 |
+
"author": "Weitao Li, Junkai Li, Weizhi Ma, and Yang Liu. 2024b.",
|
| 361 |
+
"venue": "Preprint, arXiv:2402.16063.",
|
| 362 |
+
"url": "https://arxiv.org/abs/2402.16063"
|
| 363 |
+
}
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"24": {
|
| 367 |
+
"title": "AmbigQA: Answering ambiguous open-domain questions.",
|
| 368 |
+
"author": "Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020.",
|
| 369 |
+
"venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5783\u20135797, Online. Association for Computational Linguistics.",
|
| 370 |
+
"url": "https://doi.org/10.18653/v1/2020.emnlp-main.466"
|
| 371 |
+
}
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"25": {
|
| 375 |
+
"title": "Gpt-4 technical report.",
|
| 376 |
+
"author": "OpenAI. 2024.",
|
| 377 |
+
"venue": "Preprint, arXiv:2303.08774.",
|
| 378 |
+
"url": "https://arxiv.org/abs/2303.08774"
|
| 379 |
+
}
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"26": {
|
| 383 |
+
"title": "On the capacity of citation generation by large language models.",
|
| 384 |
+
"author": "Haosheng Qian, Yixing Fan, Ruqing Zhang, and Jiafeng Guo. 2024.",
|
| 385 |
+
"venue": "Preprint, arXiv:2410.11217.",
|
| 386 |
+
"url": "https://arxiv.org/abs/2410.11217"
|
| 387 |
+
}
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"27": {
|
| 391 |
+
"title": "Attribute first, then generate: Locally-attributable grounded text generation.",
|
| 392 |
+
"author": "Aviv Slobodkin, Eran Hirsch, Arie Cattan, Tal Schuster, and Ido Dagan. 2024.",
|
| 393 |
+
"venue": "Preprint, arXiv:2403.17104.",
|
| 394 |
+
"url": "https://arxiv.org/abs/2403.17104"
|
| 395 |
+
}
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"28": {
|
| 399 |
+
"title": "Asqa: Factoid questions meet long-form answers.",
|
| 400 |
+
"author": "Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, and Ming-Wei Chang. 2023.",
|
| 401 |
+
"venue": "Preprint, arXiv:2204.06092.",
|
| 402 |
+
"url": "https://arxiv.org/abs/2204.06092"
|
| 403 |
+
}
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"29": {
|
| 407 |
+
"title": "Towards verifiable text generation with evolving memory and self-reflection.",
|
| 408 |
+
"author": "Hao Sun, Hengyi Cai, Bo Wang, Yingyan Hou, Xiaochi Wei, Shuaiqiang Wang, Yan Zhang, and Dawei Yin. 2024.",
|
| 409 |
+
"venue": "Preprint, arXiv:2312.09075.",
|
| 410 |
+
"url": "https://arxiv.org/abs/2312.09075"
|
| 411 |
+
}
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"30": {
|
| 415 |
+
"title": "Recitation-augmented language models.",
|
| 416 |
+
"author": "Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, and Denny Zhou. 2023.",
|
| 417 |
+
"venue": "Preprint, arXiv:2210.01296.",
|
| 418 |
+
"url": "https://arxiv.org/abs/2210.01296"
|
| 419 |
+
}
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"31": {
|
| 423 |
+
"title": "CommonsenseQA: A question answering challenge targeting commonsense knowledge.",
|
| 424 |
+
"author": "Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019.",
|
| 425 |
+
"venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149\u20134158, Minneapolis, Minnesota. Association for Computational Linguistics.",
|
| 426 |
+
"url": "https://doi.org/10.18653/v1/N19-1421"
|
| 427 |
+
}
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"32": {
|
| 431 |
+
"title": "Search-in-the-chain: Interactively enhancing large language models with search for knowledge-intensive tasks.",
|
| 432 |
+
"author": "Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng, and Tat-Seng Chua. 2024a.",
|
| 433 |
+
"venue": "Preprint, arXiv:2304.14732.",
|
| 434 |
+
"url": "https://arxiv.org/abs/2304.14732"
|
| 435 |
+
}
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"33": {
|
| 439 |
+
"title": "Hallucination is inevitable: An innate limitation of large language models.",
|
| 440 |
+
"author": "Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. 2024b.",
|
| 441 |
+
"venue": "Preprint, arXiv:2401.11817.",
|
| 442 |
+
"url": "https://arxiv.org/abs/2401.11817"
|
| 443 |
+
}
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"34": {
|
| 447 |
+
"title": "Atomic fact decomposition helps attributed question answering.",
|
| 448 |
+
"author": "Zhichao Yan, Jiapu Wang, Jiaoyan Chen, Xiaoli Li, Ru Li, and Jeff Z. Pan. 2024.",
|
| 449 |
+
"venue": "Preprint, arXiv:2410.16708.",
|
| 450 |
+
"url": "https://arxiv.org/abs/2410.16708"
|
| 451 |
+
}
|
| 452 |
+
},
|
| 453 |
+
{
|
| 454 |
+
"35": {
|
| 455 |
+
"title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering.",
|
| 456 |
+
"author": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018.",
|
| 457 |
+
"venue": "Preprint, arXiv:1809.09600.",
|
| 458 |
+
"url": "https://arxiv.org/abs/1809.09600"
|
| 459 |
+
}
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"36": {
|
| 463 |
+
"title": "Effective large language model adaptation for improved grounding and citation generation.",
|
| 464 |
+
"author": "Xi Ye, Ruoxi Sun, Sercan \u00d6. Arik, and Tomas Pfister. 2024.",
|
| 465 |
+
"venue": "Preprint, arXiv:2311.09533.",
|
| 466 |
+
"url": "https://arxiv.org/abs/2311.09533"
|
| 467 |
+
}
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"37": {
|
| 471 |
+
"title": "Verifiable by design: Aligning language models to quote from pre-training data.",
|
| 472 |
+
"author": "Jingyu Zhang, Marc Marone, Tianjian Li, Benjamin Van Durme, and Daniel Khashabi. 2024.",
|
| 473 |
+
"venue": "Preprint, arXiv:2404.03862.",
|
| 474 |
+
"url": "https://arxiv.org/abs/2404.03862"
|
| 475 |
+
}
|
| 476 |
+
}
|
| 477 |
+
],
|
| 478 |
+
"url": "http://arxiv.org/html/2408.04662v2"
|
| 479 |
+
}
|
20241217/2408.13854v2.json ADDED
    The diff for this file is too large to render. See raw diff
20241217/2409.09739v2.json ADDED
    The diff for this file is too large to render. See raw diff
20241217/2409.09777v4.json ADDED
    The diff for this file is too large to render. See raw diff
20241217/2409.10033v3.json ADDED
    The diff for this file is too large to render. See raw diff
20241217/2409.11404v3.json ADDED
    The diff for this file is too large to render. See raw diff
20241217/2409.12468v2.json ADDED
    The diff for this file is too large to render. See raw diff
20241217/2409.13474v3.json ADDED
    The diff for this file is too large to render. See raw diff